2025-09-19 06:18:33.978612 | Job console starting
2025-09-19 06:18:33.997900 | Updating git repos
2025-09-19 06:18:34.072725 | Cloning repos into workspace
2025-09-19 06:18:34.275784 | Restoring repo states
2025-09-19 06:18:34.309429 | Merging changes
2025-09-19 06:18:34.309449 | Checking out repos
2025-09-19 06:18:34.613493 | Preparing playbooks
2025-09-19 06:18:35.194165 | Running Ansible setup
2025-09-19 06:18:39.319996 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-19 06:18:40.094012 |
2025-09-19 06:18:40.094171 | PLAY [Base pre]
2025-09-19 06:18:40.111372 |
2025-09-19 06:18:40.111507 | TASK [Setup log path fact]
2025-09-19 06:18:40.155026 | orchestrator | ok
2025-09-19 06:18:40.197504 |
2025-09-19 06:18:40.197763 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 06:18:40.244949 | orchestrator | ok
2025-09-19 06:18:40.261090 |
2025-09-19 06:18:40.262476 | TASK [emit-job-header : Print job information]
2025-09-19 06:18:40.316028 | # Job Information
2025-09-19 06:18:40.316241 | Ansible Version: 2.16.14
2025-09-19 06:18:40.316285 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-19 06:18:40.316327 | Pipeline: post
2025-09-19 06:18:40.316357 | Executor: 521e9411259a
2025-09-19 06:18:40.316443 | Triggered by: https://github.com/osism/testbed/commit/24e3d22d2253faadc72bec5801e865adde279d36
2025-09-19 06:18:40.316475 | Event ID: 752408c0-9520-11f0-9ea1-fcce68dbf547
2025-09-19 06:18:40.324466 |
2025-09-19 06:18:40.324649 | LOOP [emit-job-header : Print node information]
2025-09-19 06:18:40.470054 | orchestrator | ok:
2025-09-19 06:18:40.470369 | orchestrator | # Node Information
2025-09-19 06:18:40.470697 | orchestrator | Inventory Hostname: orchestrator
2025-09-19 06:18:40.470754 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-19 06:18:40.470780 | orchestrator | Username: zuul-testbed04
2025-09-19 06:18:40.470804 | orchestrator | Distro: Debian 12.12
2025-09-19 06:18:40.470855 | orchestrator | Provider: static-testbed
2025-09-19 06:18:40.470880 | orchestrator | Region:
2025-09-19 06:18:40.470903 | orchestrator | Label: testbed-orchestrator
2025-09-19 06:18:40.470951 | orchestrator | Product Name: OpenStack Nova
2025-09-19 06:18:40.471048 | orchestrator | Interface IP: 81.163.193.140
2025-09-19 06:18:40.492845 |
2025-09-19 06:18:40.492979 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-19 06:18:41.085872 | orchestrator -> localhost | changed
2025-09-19 06:18:41.096444 |
2025-09-19 06:18:41.096732 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-19 06:18:42.176839 | orchestrator -> localhost | changed
2025-09-19 06:18:42.204334 |
2025-09-19 06:18:42.204515 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-19 06:18:42.492777 | orchestrator -> localhost | ok
2025-09-19 06:18:42.501439 |
2025-09-19 06:18:42.501560 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-19 06:18:42.531876 | orchestrator | ok
2025-09-19 06:18:42.551165 | orchestrator | included: /var/lib/zuul/builds/56e5b89bf4ee4e74bb04767862d53916/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-19 06:18:42.560062 |
2025-09-19 06:18:42.560171 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-19 06:18:43.764570 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-19 06:18:43.764798 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/56e5b89bf4ee4e74bb04767862d53916/work/56e5b89bf4ee4e74bb04767862d53916_id_rsa
2025-09-19 06:18:43.764839 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/56e5b89bf4ee4e74bb04767862d53916/work/56e5b89bf4ee4e74bb04767862d53916_id_rsa.pub
2025-09-19 06:18:43.764867 | orchestrator -> localhost | The key fingerprint is:
2025-09-19 06:18:43.764892 | orchestrator -> localhost | SHA256:3wChn+3QNGknl/pKSkVGzVoifbugaLte/PKoiCICLMs zuul-build-sshkey
2025-09-19 06:18:43.764921 | orchestrator -> localhost | The key's randomart image is:
2025-09-19 06:18:43.764953 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-19 06:18:43.764975 | orchestrator -> localhost | | ...o |
2025-09-19 06:18:43.764997 | orchestrator -> localhost | | ..oo.=. |
2025-09-19 06:18:43.765018 | orchestrator -> localhost | | . ..O=+. |
2025-09-19 06:18:43.765039 | orchestrator -> localhost | | . Xo*. |
2025-09-19 06:18:43.765060 | orchestrator -> localhost | |. .S.*. . |
2025-09-19 06:18:43.765086 | orchestrator -> localhost | |o. o..= +. |
2025-09-19 06:18:43.765106 | orchestrator -> localhost | |+. . .+ + o |
2025-09-19 06:18:43.765126 | orchestrator -> localhost | |=E . ..o.= . |
2025-09-19 06:18:43.765146 | orchestrator -> localhost | |o.. ..+ooo+ |
2025-09-19 06:18:43.765166 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-19 06:18:43.765222 | orchestrator -> localhost | ok: Runtime: 0:00:00.585396
2025-09-19 06:18:43.773699 |
2025-09-19 06:18:43.773808 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-19 06:18:43.817951 | orchestrator | ok
2025-09-19 06:18:43.835712 | orchestrator | included: /var/lib/zuul/builds/56e5b89bf4ee4e74bb04767862d53916/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-19 06:18:43.849479 |
2025-09-19 06:18:43.849575 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-19 06:18:43.874315 | orchestrator | skipping: Conditional result was False
2025-09-19 06:18:43.891407 |
2025-09-19 06:18:43.891572 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-19 06:18:44.936520 | orchestrator | changed
2025-09-19 06:18:44.944574 |
2025-09-19 06:18:44.944735 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-19 06:18:45.286745 | orchestrator | ok
2025-09-19 06:18:45.306410 |
2025-09-19 06:18:45.306541 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-19 06:18:45.779711 | orchestrator | ok
2025-09-19 06:18:45.794162 |
2025-09-19 06:18:45.794338 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-19 06:18:46.249670 | orchestrator | ok
2025-09-19 06:18:46.271318 |
2025-09-19 06:18:46.272348 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-19 06:18:46.325856 | orchestrator | skipping: Conditional result was False
2025-09-19 06:18:46.338944 |
2025-09-19 06:18:46.339079 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-19 06:18:48.722418 | orchestrator -> localhost | changed
2025-09-19 06:18:48.757220 |
2025-09-19 06:18:48.757327 | TASK [add-build-sshkey : Add back temp key]
2025-09-19 06:18:49.972951 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/56e5b89bf4ee4e74bb04767862d53916/work/56e5b89bf4ee4e74bb04767862d53916_id_rsa (zuul-build-sshkey)
2025-09-19 06:18:49.973158 | orchestrator -> localhost | ok: Runtime: 0:00:00.026874
2025-09-19 06:18:49.979986 |
2025-09-19 06:18:49.980087 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-19 06:18:50.377210 | orchestrator | ok
2025-09-19 06:18:50.384649 |
2025-09-19 06:18:50.384746 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-19 06:18:50.418045 | orchestrator | skipping: Conditional result was False
2025-09-19 06:18:50.459017 |
2025-09-19 06:18:50.459117 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-19 06:18:50.831318 | orchestrator | ok
2025-09-19 06:18:50.846329 |
2025-09-19 06:18:50.846434 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-19 06:18:50.913014 | orchestrator | ok
2025-09-19 06:18:50.923045 |
2025-09-19 06:18:50.923139 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-19 06:18:51.817998 | orchestrator -> localhost | ok
2025-09-19 06:18:51.825143 |
2025-09-19 06:18:51.825243 | TASK [validate-host : Collect information about the host]
2025-09-19 06:18:53.249188 | orchestrator | ok
2025-09-19 06:18:53.286564 |
2025-09-19 06:18:53.286701 | TASK [validate-host : Sanitize hostname]
2025-09-19 06:18:53.355275 | orchestrator | ok
2025-09-19 06:18:53.360348 |
2025-09-19 06:18:53.360435 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-19 06:18:54.350901 | orchestrator -> localhost | changed
2025-09-19 06:18:54.355971 |
2025-09-19 06:18:54.356050 | TASK [validate-host : Collect information about zuul worker]
2025-09-19 06:18:55.036119 | orchestrator | ok
2025-09-19 06:18:55.040971 |
2025-09-19 06:18:55.041052 | TASK [validate-host : Write out all zuul information for each host]
2025-09-19 06:18:56.189366 | orchestrator -> localhost | changed
2025-09-19 06:18:56.200252 |
2025-09-19 06:18:56.200337 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-19 06:18:56.495855 | orchestrator | ok
2025-09-19 06:18:56.500579 |
2025-09-19 06:18:56.500681 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-19 06:19:36.150213 | orchestrator | changed:
2025-09-19 06:19:36.150429 | orchestrator | .d..t...... src/
2025-09-19 06:19:36.150464 | orchestrator | .d..t...... src/github.com/
2025-09-19 06:19:36.150490 | orchestrator | .d..t...... src/github.com/osism/
2025-09-19 06:19:36.150511 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-19 06:19:36.150532 | orchestrator | RedHat.yml
2025-09-19 06:19:36.167652 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-19 06:19:36.167671 | orchestrator | RedHat.yml
2025-09-19 06:19:36.167726 | orchestrator | = 1.53.0"...
2025-09-19 06:19:48.227402 | orchestrator | 06:19:48.227 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-19 06:19:48.408638 | orchestrator | 06:19:48.408 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-19 06:19:48.899123 | orchestrator | 06:19:48.898 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 06:19:49.301119 | orchestrator | 06:19:49.300 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-19 06:19:50.153783 | orchestrator | 06:19:50.153 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-19 06:19:50.226283 | orchestrator | 06:19:50.226 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-19 06:19:50.691403 | orchestrator | 06:19:50.691 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 06:19:50.691823 | orchestrator | 06:19:50.691 STDOUT terraform: Providers are signed by their developers.
2025-09-19 06:19:50.691834 | orchestrator | 06:19:50.691 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-19 06:19:50.691840 | orchestrator | 06:19:50.691 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-19 06:19:50.691866 | orchestrator | 06:19:50.691 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-19 06:19:50.691879 | orchestrator | 06:19:50.691 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-19 06:19:50.691984 | orchestrator | 06:19:50.691 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-19 06:19:50.692070 | orchestrator | 06:19:50.691 STDOUT terraform: you run "tofu init" in the future.
2025-09-19 06:19:50.693137 | orchestrator | 06:19:50.693 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-19 06:19:50.693236 | orchestrator | 06:19:50.693 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-19 06:19:50.693330 | orchestrator | 06:19:50.693 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-19 06:19:50.693351 | orchestrator | 06:19:50.693 STDOUT terraform: should now work.
2025-09-19 06:19:50.693449 | orchestrator | 06:19:50.693 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-19 06:19:50.693552 | orchestrator | 06:19:50.693 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-19 06:19:50.693638 | orchestrator | 06:19:50.693 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-19 06:19:50.806501 | orchestrator | 06:19:50.805 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-09-19 06:19:50.806571 | orchestrator | 06:19:50.805 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-19 06:19:51.029348 | orchestrator | 06:19:51.029 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-19 06:19:51.029400 | orchestrator | 06:19:51.029 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-19 06:19:51.029410 | orchestrator | 06:19:51.029 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-19 06:19:51.029415 | orchestrator | 06:19:51.029 STDOUT terraform: for this configuration.
2025-09-19 06:19:51.227659 | orchestrator | 06:19:51.227 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-09-19 06:19:51.227708 | orchestrator | 06:19:51.227 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-19 06:19:51.342098 | orchestrator | 06:19:51.336 STDOUT terraform: ci.auto.tfvars
2025-09-19 06:19:51.342154 | orchestrator | 06:19:51.342 STDOUT terraform: default_custom.tf
2025-09-19 06:19:51.449876 | orchestrator | 06:19:51.449 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
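The repeated `TERRAGRUNT_TFPATH` warnings above suggest their own fix. A minimal POSIX-shell sketch of that migration, using the exact path the warning recommends (the fallback logic and the commented command forms are assumptions, not part of this job's playbooks):

```shell
#!/bin/sh
# Sketch: migrate the deprecated TERRAGRUNT_TFPATH variable to TG_TF_PATH,
# as the Terragrunt warnings in this log recommend.
TERRAGRUNT_TFPATH="/home/zuul-testbed04/terraform"   # path from the warning

# Prefer TG_TF_PATH; fall back to the deprecated variable only if it is set.
if [ -z "${TG_TF_PATH:-}" ] && [ -n "${TERRAGRUNT_TFPATH:-}" ]; then
    TG_TF_PATH="${TERRAGRUNT_TFPATH}"
    export TG_TF_PATH
    unset TERRAGRUNT_TFPATH
fi

echo "TG_TF_PATH=${TG_TF_PATH}"
# The deprecated command forms would likewise become, per the warnings:
#   terragrunt run -- workspace new ci   # instead of: terragrunt workspace new ci
#   terragrunt run -- fmt                # instead of: terragrunt fmt
```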
2025-09-19 06:19:52.442284 | orchestrator | 06:19:52.442 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-19 06:19:52.975751 | orchestrator | 06:19:52.975 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-19 06:19:53.245547 | orchestrator | 06:19:53.241 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-19 06:19:53.245613 | orchestrator | 06:19:53.242 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-19 06:19:53.245759 | orchestrator | 06:19:53.245 STDOUT terraform:   + create
2025-09-19 06:19:53.245783 | orchestrator | 06:19:53.245 STDOUT terraform:  <= read (data resources)
2025-09-19 06:19:53.245889 | orchestrator | 06:19:53.245 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-19 06:19:53.245911 | orchestrator | 06:19:53.245 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-19 06:19:53.245942 | orchestrator | 06:19:53.245 STDOUT terraform:   # (config refers to values not yet known)
2025-09-19 06:19:53.245976 | orchestrator | 06:19:53.245 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-19 06:19:53.246009 | orchestrator | 06:19:53.245 STDOUT terraform:   + checksum = (known after apply)
2025-09-19 06:19:53.246092 | orchestrator | 06:19:53.246 STDOUT terraform:   + created_at = (known after apply)
2025-09-19 06:19:53.246099 | orchestrator | 06:19:53.246 STDOUT terraform:   + file = (known after apply)
2025-09-19 06:19:53.246105 | orchestrator | 06:19:53.246 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.246127 | orchestrator | 06:19:53.246 STDOUT terraform:   + metadata = (known after apply)
2025-09-19 06:19:53.246155 | orchestrator | 06:19:53.246 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-19 06:19:53.246186 | orchestrator | 06:19:53.246 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-19 06:19:53.246208 | orchestrator | 06:19:53.246 STDOUT terraform:   + most_recent = true
2025-09-19 06:19:53.246236 | orchestrator | 06:19:53.246 STDOUT terraform:   + name = (known after apply)
2025-09-19 06:19:53.246265 | orchestrator | 06:19:53.246 STDOUT terraform:   + protected = (known after apply)
2025-09-19 06:19:53.246292 | orchestrator | 06:19:53.246 STDOUT terraform:   + region = (known after apply)
2025-09-19 06:19:53.246322 | orchestrator | 06:19:53.246 STDOUT terraform:   + schema = (known after apply)
2025-09-19 06:19:53.246349 | orchestrator | 06:19:53.246 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-19 06:19:53.246378 | orchestrator | 06:19:53.246 STDOUT terraform:   + tags = (known after apply)
2025-09-19 06:19:53.246407 | orchestrator | 06:19:53.246 STDOUT terraform:   + updated_at = (known after apply)
2025-09-19 06:19:53.246421 | orchestrator | 06:19:53.246 STDOUT terraform:   }
2025-09-19 06:19:53.246490 | orchestrator | 06:19:53.246 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-19 06:19:53.246519 | orchestrator | 06:19:53.246 STDOUT terraform:   # (config refers to values not yet known)
2025-09-19 06:19:53.246554 | orchestrator | 06:19:53.246 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-19 06:19:53.246582 | orchestrator | 06:19:53.246 STDOUT terraform:   + checksum = (known after apply)
2025-09-19 06:19:53.246618 | orchestrator | 06:19:53.246 STDOUT terraform:   + created_at = (known after apply)
2025-09-19 06:19:53.246647 | orchestrator | 06:19:53.246 STDOUT terraform:   + file = (known after apply)
2025-09-19 06:19:53.246678 | orchestrator | 06:19:53.246 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.246709 | orchestrator | 06:19:53.246 STDOUT terraform:   + metadata = (known after apply)
2025-09-19 06:19:53.246746 | orchestrator | 06:19:53.246 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-19 06:19:53.246776 | orchestrator | 06:19:53.246 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-19 06:19:53.246809 | orchestrator | 06:19:53.246 STDOUT terraform:   + most_recent = true
2025-09-19 06:19:53.246834 | orchestrator | 06:19:53.246 STDOUT terraform:   + name = (known after apply)
2025-09-19 06:19:53.246873 | orchestrator | 06:19:53.246 STDOUT terraform:   + protected = (known after apply)
2025-09-19 06:19:53.246897 | orchestrator | 06:19:53.246 STDOUT terraform:   + region = (known after apply)
2025-09-19 06:19:53.246928 | orchestrator | 06:19:53.246 STDOUT terraform:   + schema = (known after apply)
2025-09-19 06:19:53.246958 | orchestrator | 06:19:53.246 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-19 06:19:53.246987 | orchestrator | 06:19:53.246 STDOUT terraform:   + tags = (known after apply)
2025-09-19 06:19:53.247016 | orchestrator | 06:19:53.246 STDOUT terraform:   + updated_at = (known after apply)
2025-09-19 06:19:53.247022 | orchestrator | 06:19:53.247 STDOUT terraform:   }
2025-09-19 06:19:53.247053 | orchestrator | 06:19:53.247 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-19 06:19:53.247082 | orchestrator | 06:19:53.247 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-19 06:19:53.247117 | orchestrator | 06:19:53.247 STDOUT terraform:   + content = (known after apply)
2025-09-19 06:19:53.247151 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-19 06:19:53.247185 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-19 06:19:53.247221 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-19 06:19:53.247256 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-19 06:19:53.247291 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-19 06:19:53.247325 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-19 06:19:53.247348 | orchestrator | 06:19:53.247 STDOUT terraform:   + directory_permission = "0777"
2025-09-19 06:19:53.247384 | orchestrator | 06:19:53.247 STDOUT terraform:   + file_permission = "0644"
2025-09-19 06:19:53.247407 | orchestrator | 06:19:53.247 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-09-19 06:19:53.247442 | orchestrator | 06:19:53.247 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.247456 | orchestrator | 06:19:53.247 STDOUT terraform:   }
2025-09-19 06:19:53.247483 | orchestrator | 06:19:53.247 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-19 06:19:53.247508 | orchestrator | 06:19:53.247 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-19 06:19:53.247544 | orchestrator | 06:19:53.247 STDOUT terraform:   + content = (known after apply)
2025-09-19 06:19:53.247576 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-19 06:19:53.247610 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-19 06:19:53.247645 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-19 06:19:53.247681 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-19 06:19:53.247714 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-19 06:19:53.247748 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-19 06:19:53.247771 | orchestrator | 06:19:53.247 STDOUT terraform:   + directory_permission = "0777"
2025-09-19 06:19:53.247793 | orchestrator | 06:19:53.247 STDOUT terraform:   + file_permission = "0644"
2025-09-19 06:19:53.247823 | orchestrator | 06:19:53.247 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-09-19 06:19:53.247888 | orchestrator | 06:19:53.247 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.247896 | orchestrator | 06:19:53.247 STDOUT terraform:   }
2025-09-19 06:19:53.247944 | orchestrator | 06:19:53.247 STDOUT terraform:   # local_file.inventory will be created
2025-09-19 06:19:53.247964 | orchestrator | 06:19:53.247 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-19 06:19:53.248001 | orchestrator | 06:19:53.247 STDOUT terraform:   + content = (known after apply)
2025-09-19 06:19:53.248036 | orchestrator | 06:19:53.247 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-19 06:19:53.248069 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-19 06:19:53.248104 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-19 06:19:53.248138 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-19 06:19:53.248171 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-19 06:19:53.248206 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-19 06:19:53.248229 | orchestrator | 06:19:53.248 STDOUT terraform:   + directory_permission = "0777"
2025-09-19 06:19:53.248252 | orchestrator | 06:19:53.248 STDOUT terraform:   + file_permission = "0644"
2025-09-19 06:19:53.248281 | orchestrator | 06:19:53.248 STDOUT terraform:   + filename = "inventory.ci"
2025-09-19 06:19:53.248315 | orchestrator | 06:19:53.248 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.248322 | orchestrator | 06:19:53.248 STDOUT terraform:   }
2025-09-19 06:19:53.248352 | orchestrator | 06:19:53.248 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-19 06:19:53.248382 | orchestrator | 06:19:53.248 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-19 06:19:53.248414 | orchestrator | 06:19:53.248 STDOUT terraform:   + content = (sensitive value)
2025-09-19 06:19:53.248448 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-19 06:19:53.248481 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-19 06:19:53.248516 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-19 06:19:53.248550 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-19 06:19:53.248583 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-19 06:19:53.248618 | orchestrator | 06:19:53.248 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-19 06:19:53.248640 | orchestrator | 06:19:53.248 STDOUT terraform:   + directory_permission = "0700"
2025-09-19 06:19:53.248663 | orchestrator | 06:19:53.248 STDOUT terraform:   + file_permission = "0600"
2025-09-19 06:19:53.248691 | orchestrator | 06:19:53.248 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-09-19 06:19:53.248725 | orchestrator | 06:19:53.248 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.248731 | orchestrator | 06:19:53.248 STDOUT terraform:   }
2025-09-19 06:19:53.248762 | orchestrator | 06:19:53.248 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-19 06:19:53.248791 | orchestrator | 06:19:53.248 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-19 06:19:53.248813 | orchestrator | 06:19:53.248 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.248819 | orchestrator | 06:19:53.248 STDOUT terraform:   }
2025-09-19 06:19:53.248880 | orchestrator | 06:19:53.248 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-19 06:19:53.248929 | orchestrator | 06:19:53.248 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-19 06:19:53.248959 | orchestrator | 06:19:53.248 STDOUT terraform:   + attachment = (known after apply)
2025-09-19 06:19:53.248982 | orchestrator | 06:19:53.248 STDOUT terraform:   + availability_zone = "nova"
2025-09-19 06:19:53.249016 | orchestrator | 06:19:53.248 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.249050 | orchestrator | 06:19:53.249 STDOUT terraform:   + image_id = (known after apply)
2025-09-19 06:19:53.249084 | orchestrator | 06:19:53.249 STDOUT terraform:   + metadata = (known after apply)
2025-09-19 06:19:53.249127 | orchestrator | 06:19:53.249 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-09-19 06:19:53.249161 | orchestrator | 06:19:53.249 STDOUT terraform:   + region = (known after apply)
2025-09-19 06:19:53.249183 | orchestrator | 06:19:53.249 STDOUT terraform:   + size = 80
2025-09-19 06:19:53.249206 | orchestrator | 06:19:53.249 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-19 06:19:53.249230 | orchestrator | 06:19:53.249 STDOUT terraform:   + volume_type = "ssd"
2025-09-19 06:19:53.249236 | orchestrator | 06:19:53.249 STDOUT terraform:   }
2025-09-19 06:19:53.249283 | orchestrator | 06:19:53.249 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-19 06:19:53.249326 | orchestrator | 06:19:53.249 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 06:19:53.249362 | orchestrator | 06:19:53.249 STDOUT terraform:   + attachment = (known after apply)
2025-09-19 06:19:53.249385 | orchestrator | 06:19:53.249 STDOUT terraform:   + availability_zone = "nova"
2025-09-19 06:19:53.249419 | orchestrator | 06:19:53.249 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.249454 | orchestrator | 06:19:53.249 STDOUT terraform:   + image_id = (known after apply)
2025-09-19 06:19:53.249488 | orchestrator | 06:19:53.249 STDOUT terraform:   + metadata = (known after apply)
2025-09-19 06:19:53.249531 | orchestrator | 06:19:53.249 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-09-19 06:19:53.249565 | orchestrator | 06:19:53.249 STDOUT terraform:   + region = (known after apply)
2025-09-19 06:19:53.249586 | orchestrator | 06:19:53.249 STDOUT terraform:   + size = 80
2025-09-19 06:19:53.249609 | orchestrator | 06:19:53.249 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-19 06:19:53.249634 | orchestrator | 06:19:53.249 STDOUT terraform:   + volume_type = "ssd"
2025-09-19 06:19:53.249649 | orchestrator | 06:19:53.249 STDOUT terraform:   }
2025-09-19 06:19:53.249696 | orchestrator | 06:19:53.249 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-19 06:19:53.249742 | orchestrator | 06:19:53.249 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 06:19:53.249777 | orchestrator | 06:19:53.249 STDOUT terraform:   + attachment = (known after apply)
2025-09-19 06:19:53.249800 | orchestrator | 06:19:53.249 STDOUT terraform:   + availability_zone = "nova"
2025-09-19 06:19:53.249836 | orchestrator | 06:19:53.249 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.249893 | orchestrator | 06:19:53.249 STDOUT terraform:   + image_id = (known after apply)
2025-09-19 06:19:53.249946 | orchestrator | 06:19:53.249 STDOUT terraform:   + metadata = (known after apply)
2025-09-19 06:19:53.249990 | orchestrator | 06:19:53.249 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-09-19 06:19:53.250041 | orchestrator | 06:19:53.249 STDOUT terraform:   + region = (known after apply)
2025-09-19 06:19:53.250061 | orchestrator | 06:19:53.250 STDOUT terraform:   + size = 80
2025-09-19 06:19:53.250086 | orchestrator | 06:19:53.250 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-19 06:19:53.250110 | orchestrator | 06:19:53.250 STDOUT terraform:   + volume_type = "ssd"
2025-09-19 06:19:53.250124 | orchestrator | 06:19:53.250 STDOUT terraform:   }
2025-09-19 06:19:53.250171 | orchestrator | 06:19:53.250 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-19 06:19:53.250215 | orchestrator | 06:19:53.250 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 06:19:53.250250 | orchestrator | 06:19:53.250 STDOUT terraform:   + attachment = (known after apply)
2025-09-19 06:19:53.250277 | orchestrator | 06:19:53.250 STDOUT terraform:   + availability_zone = "nova"
2025-09-19 06:19:53.250313 | orchestrator | 06:19:53.250 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.250348 | orchestrator | 06:19:53.250 STDOUT terraform:   + image_id = (known after apply)
2025-09-19 06:19:53.250396 | orchestrator | 06:19:53.250 STDOUT terraform:   + metadata = (known after apply)
2025-09-19 06:19:53.250463 | orchestrator | 06:19:53.250 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-09-19 06:19:53.250519 | orchestrator | 06:19:53.250 STDOUT terraform:   + region = (known after apply)
2025-09-19 06:19:53.250568 | orchestrator | 06:19:53.250 STDOUT terraform:   + size = 80
2025-09-19 06:19:53.250575 | orchestrator | 06:19:53.250 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-19 06:19:53.250596 | orchestrator | 06:19:53.250 STDOUT terraform:   + volume_type = "ssd"
2025-09-19 06:19:53.250612 | orchestrator | 06:19:53.250 STDOUT terraform:   }
2025-09-19 06:19:53.250657 | orchestrator | 06:19:53.250 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-19 06:19:53.250704 | orchestrator | 06:19:53.250 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 06:19:53.250738 | orchestrator | 06:19:53.250 STDOUT terraform:   + attachment = (known after apply)
2025-09-19 06:19:53.250763 | orchestrator | 06:19:53.250 STDOUT terraform:   + availability_zone = "nova"
2025-09-19 06:19:53.250798 | orchestrator | 06:19:53.250 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.250835 | orchestrator | 06:19:53.250 STDOUT terraform:   + image_id = (known after apply)
2025-09-19 06:19:53.250900 | orchestrator | 06:19:53.250 STDOUT terraform:   + metadata = (known after apply)
2025-09-19 06:19:53.250935 | orchestrator | 06:19:53.250 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-09-19 06:19:53.250970 | orchestrator | 06:19:53.250 STDOUT terraform:   + region = (known after apply)
2025-09-19 06:19:53.250992 | orchestrator | 06:19:53.250 STDOUT terraform:   + size = 80
2025-09-19 06:19:53.251016 | orchestrator | 06:19:53.250 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-19 06:19:53.251040 | orchestrator | 06:19:53.251 STDOUT terraform:   + volume_type = "ssd"
2025-09-19 06:19:53.251046 | orchestrator | 06:19:53.251 STDOUT terraform:   }
2025-09-19 06:19:53.251096 | orchestrator | 06:19:53.251 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-19 06:19:53.251144 | orchestrator | 06:19:53.251 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 06:19:53.251177 | orchestrator | 06:19:53.251 STDOUT terraform:   + attachment = (known after apply)
2025-09-19 06:19:53.251202 | orchestrator | 06:19:53.251 STDOUT terraform:   + availability_zone = "nova"
2025-09-19 06:19:53.251239 | orchestrator | 06:19:53.251 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.251275 | orchestrator | 06:19:53.251 STDOUT terraform:   + image_id = (known after apply)
2025-09-19 06:19:53.251311 | orchestrator | 06:19:53.251 STDOUT terraform:   + metadata = (known after apply)
2025-09-19 06:19:53.251354 | orchestrator | 06:19:53.251 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-09-19 06:19:53.251389 | orchestrator | 06:19:53.251 STDOUT terraform:   + region = (known after apply)
2025-09-19 06:19:53.251409 | orchestrator | 06:19:53.251 STDOUT terraform:   + size = 80
2025-09-19 06:19:53.251433 | orchestrator | 06:19:53.251 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-19 06:19:53.251457 | orchestrator | 06:19:53.251 STDOUT terraform:   + volume_type = "ssd"
2025-09-19 06:19:53.251463 | orchestrator | 06:19:53.251 STDOUT terraform:   }
2025-09-19 06:19:53.251511 | orchestrator | 06:19:53.251 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-19 06:19:53.251555 | orchestrator | 06:19:53.251 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 06:19:53.251590 | orchestrator | 06:19:53.251 STDOUT terraform:   + attachment = (known after apply)
2025-09-19 06:19:53.251613 | orchestrator | 06:19:53.251 STDOUT terraform:   + availability_zone = "nova"
2025-09-19 06:19:53.251647 | orchestrator | 06:19:53.251 STDOUT terraform:   + id = (known after apply)
2025-09-19 06:19:53.251681 | orchestrator | 06:19:53.251 STDOUT terraform:   + image_id = (known after apply)
2025-09-19 06:19:53.251715 | orchestrator | 06:19:53.251 STDOUT terraform:   + metadata = (known after apply)
2025-09-19 06:19:53.251759 | orchestrator | 06:19:53.251 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-09-19 06:19:53.251794 | orchestrator | 06:19:53.251 STDOUT terraform:   + region = (known after apply)
2025-09-19 06:19:53.251813 | orchestrator | 06:19:53.251 STDOUT terraform:   + size = 80
2025-09-19 06:19:53.251837 | orchestrator | 06:19:53.251 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-19 06:19:53.251872 | orchestrator | 06:19:53.251 STDOUT terraform:   + volume_type = "ssd"
2025-09-19 06:19:53.251885 | orchestrator | 06:19:53.251 STDOUT terraform:   }
2025-09-19 06:19:53.251929 | orchestrator | 06:19:53.251 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-19 06:19:53.251981 | orchestrator | 06:19:53.251 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-19 06:19:53.252008 | orchestrator | 06:19:53.251 STDOUT
terraform:  + attachment = (known after apply) 2025-09-19 06:19:53.252029 | orchestrator | 06:19:53.252 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.252065 | orchestrator | 06:19:53.252 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.252100 | orchestrator | 06:19:53.252 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:53.252138 | orchestrator | 06:19:53.252 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-19 06:19:53.252173 | orchestrator | 06:19:53.252 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.252193 | orchestrator | 06:19:53.252 STDOUT terraform:  + size = 20 2025-09-19 06:19:53.252217 | orchestrator | 06:19:53.252 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:53.252241 | orchestrator | 06:19:53.252 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:53.252247 | orchestrator | 06:19:53.252 STDOUT terraform:  } 2025-09-19 06:19:53.252292 | orchestrator | 06:19:53.252 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-19 06:19:53.252334 | orchestrator | 06:19:53.252 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 06:19:53.252368 | orchestrator | 06:19:53.252 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:53.252391 | orchestrator | 06:19:53.252 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.252426 | orchestrator | 06:19:53.252 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.252460 | orchestrator | 06:19:53.252 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:53.252498 | orchestrator | 06:19:53.252 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-19 06:19:53.252538 | orchestrator | 06:19:53.252 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.252561 | orchestrator | 06:19:53.252 STDOUT terraform:  + size = 20 2025-09-19 06:19:53.252584 | 
orchestrator | 06:19:53.252 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:53.252607 | orchestrator | 06:19:53.252 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:53.252622 | orchestrator | 06:19:53.252 STDOUT terraform:  } 2025-09-19 06:19:53.252664 | orchestrator | 06:19:53.252 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-19 06:19:53.252714 | orchestrator | 06:19:53.252 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 06:19:53.252748 | orchestrator | 06:19:53.252 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:53.252772 | orchestrator | 06:19:53.252 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.252807 | orchestrator | 06:19:53.252 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.252871 | orchestrator | 06:19:53.252 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:53.252889 | orchestrator | 06:19:53.252 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-19 06:19:53.252924 | orchestrator | 06:19:53.252 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.252946 | orchestrator | 06:19:53.252 STDOUT terraform:  + size = 20 2025-09-19 06:19:53.252976 | orchestrator | 06:19:53.252 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:53.252997 | orchestrator | 06:19:53.252 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:53.253004 | orchestrator | 06:19:53.252 STDOUT terraform:  } 2025-09-19 06:19:53.253049 | orchestrator | 06:19:53.253 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-19 06:19:53.253091 | orchestrator | 06:19:53.253 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 06:19:53.253125 | orchestrator | 06:19:53.253 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:53.253149 | orchestrator | 
06:19:53.253 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.253199 | orchestrator | 06:19:53.253 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.253218 | orchestrator | 06:19:53.253 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:53.253256 | orchestrator | 06:19:53.253 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-19 06:19:53.253290 | orchestrator | 06:19:53.253 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.253309 | orchestrator | 06:19:53.253 STDOUT terraform:  + size = 20 2025-09-19 06:19:53.253332 | orchestrator | 06:19:53.253 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:53.253356 | orchestrator | 06:19:53.253 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:53.253362 | orchestrator | 06:19:53.253 STDOUT terraform:  } 2025-09-19 06:19:53.253408 | orchestrator | 06:19:53.253 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-19 06:19:53.253452 | orchestrator | 06:19:53.253 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 06:19:53.253485 | orchestrator | 06:19:53.253 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:53.253509 | orchestrator | 06:19:53.253 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.253543 | orchestrator | 06:19:53.253 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.253578 | orchestrator | 06:19:53.253 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:53.253616 | orchestrator | 06:19:53.253 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-19 06:19:53.253649 | orchestrator | 06:19:53.253 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.253669 | orchestrator | 06:19:53.253 STDOUT terraform:  + size = 20 2025-09-19 06:19:53.253692 | orchestrator | 06:19:53.253 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 
06:19:53.253715 | orchestrator | 06:19:53.253 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:53.253721 | orchestrator | 06:19:53.253 STDOUT terraform:  } 2025-09-19 06:19:53.253769 | orchestrator | 06:19:53.253 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-19 06:19:53.253810 | orchestrator | 06:19:53.253 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 06:19:53.253857 | orchestrator | 06:19:53.253 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:53.253879 | orchestrator | 06:19:53.253 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.253913 | orchestrator | 06:19:53.253 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.253948 | orchestrator | 06:19:53.253 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:53.253985 | orchestrator | 06:19:53.253 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-19 06:19:53.254039 | orchestrator | 06:19:53.253 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.254050 | orchestrator | 06:19:53.254 STDOUT terraform:  + size = 20 2025-09-19 06:19:53.254071 | orchestrator | 06:19:53.254 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:53.254094 | orchestrator | 06:19:53.254 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:53.254100 | orchestrator | 06:19:53.254 STDOUT terraform:  } 2025-09-19 06:19:53.254145 | orchestrator | 06:19:53.254 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-19 06:19:53.254187 | orchestrator | 06:19:53.254 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 06:19:53.254221 | orchestrator | 06:19:53.254 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:53.254245 | orchestrator | 06:19:53.254 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.254280 | 
orchestrator | 06:19:53.254 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.254315 | orchestrator | 06:19:53.254 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:53.254353 | orchestrator | 06:19:53.254 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-19 06:19:53.254387 | orchestrator | 06:19:53.254 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.254407 | orchestrator | 06:19:53.254 STDOUT terraform:  + size = 20 2025-09-19 06:19:53.254431 | orchestrator | 06:19:53.254 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:53.254454 | orchestrator | 06:19:53.254 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:53.254461 | orchestrator | 06:19:53.254 STDOUT terraform:  } 2025-09-19 06:19:53.254588 | orchestrator | 06:19:53.254 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-19 06:19:53.254629 | orchestrator | 06:19:53.254 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 06:19:53.254662 | orchestrator | 06:19:53.254 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:53.254686 | orchestrator | 06:19:53.254 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.254724 | orchestrator | 06:19:53.254 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.254759 | orchestrator | 06:19:53.254 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:53.254797 | orchestrator | 06:19:53.254 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-19 06:19:53.254830 | orchestrator | 06:19:53.254 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.254881 | orchestrator | 06:19:53.254 STDOUT terraform:  + size = 20 2025-09-19 06:19:53.254892 | orchestrator | 06:19:53.254 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:53.254917 | orchestrator | 06:19:53.254 STDOUT terraform:  + volume_type = "ssd" 
2025-09-19 06:19:53.254927 | orchestrator | 06:19:53.254 STDOUT terraform:  } 2025-09-19 06:19:53.254970 | orchestrator | 06:19:53.254 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-19 06:19:53.255010 | orchestrator | 06:19:53.254 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 06:19:53.255044 | orchestrator | 06:19:53.255 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 06:19:53.255068 | orchestrator | 06:19:53.255 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.255101 | orchestrator | 06:19:53.255 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.255136 | orchestrator | 06:19:53.255 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 06:19:53.255175 | orchestrator | 06:19:53.255 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-19 06:19:53.255210 | orchestrator | 06:19:53.255 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.255231 | orchestrator | 06:19:53.255 STDOUT terraform:  + size = 20 2025-09-19 06:19:53.255255 | orchestrator | 06:19:53.255 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 06:19:53.255278 | orchestrator | 06:19:53.255 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 06:19:53.255284 | orchestrator | 06:19:53.255 STDOUT terraform:  } 2025-09-19 06:19:53.255336 | orchestrator | 06:19:53.255 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-19 06:19:53.255373 | orchestrator | 06:19:53.255 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-19 06:19:53.255406 | orchestrator | 06:19:53.255 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 06:19:53.255439 | orchestrator | 06:19:53.255 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 06:19:53.255472 | orchestrator | 06:19:53.255 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-19 06:19:53.255508 | orchestrator | 06:19:53.255 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:53.255531 | orchestrator | 06:19:53.255 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.255546 | orchestrator | 06:19:53.255 STDOUT terraform:  + config_drive = true 2025-09-19 06:19:53.255581 | orchestrator | 06:19:53.255 STDOUT terraform:  + created = (known after apply) 2025-09-19 06:19:53.255615 | orchestrator | 06:19:53.255 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 06:19:53.255646 | orchestrator | 06:19:53.255 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-19 06:19:53.255668 | orchestrator | 06:19:53.255 STDOUT terraform:  + force_delete = false 2025-09-19 06:19:53.255700 | orchestrator | 06:19:53.255 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 06:19:53.255733 | orchestrator | 06:19:53.255 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.255766 | orchestrator | 06:19:53.255 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 06:19:53.255799 | orchestrator | 06:19:53.255 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 06:19:53.255824 | orchestrator | 06:19:53.255 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 06:19:53.255868 | orchestrator | 06:19:53.255 STDOUT terraform:  + name = "testbed-manager" 2025-09-19 06:19:53.255891 | orchestrator | 06:19:53.255 STDOUT terraform:  + power_state = "active" 2025-09-19 06:19:53.255924 | orchestrator | 06:19:53.255 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.255958 | orchestrator | 06:19:53.255 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 06:19:53.255980 | orchestrator | 06:19:53.255 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 06:19:53.256014 | orchestrator | 06:19:53.255 STDOUT terraform:  + updated = (known after apply) 2025-09-19 06:19:53.256044 | orchestrator | 06:19:53.256 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-19 06:19:53.256062 | orchestrator | 06:19:53.256 STDOUT terraform:  + block_device { 2025-09-19 06:19:53.256085 | orchestrator | 06:19:53.256 STDOUT terraform:  + boot_index = 0 2025-09-19 06:19:53.256113 | orchestrator | 06:19:53.256 STDOUT terraform:  + delete_on_termination = false 2025-09-19 06:19:53.256141 | orchestrator | 06:19:53.256 STDOUT terraform:  + destination_type = "volume" 2025-09-19 06:19:53.256168 | orchestrator | 06:19:53.256 STDOUT terraform:  + multiattach = false 2025-09-19 06:19:53.256197 | orchestrator | 06:19:53.256 STDOUT terraform:  + source_type = "volume" 2025-09-19 06:19:53.256234 | orchestrator | 06:19:53.256 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:53.256249 | orchestrator | 06:19:53.256 STDOUT terraform:  } 2025-09-19 06:19:53.256263 | orchestrator | 06:19:53.256 STDOUT terraform:  + network { 2025-09-19 06:19:53.256283 | orchestrator | 06:19:53.256 STDOUT terraform:  + access_network = false 2025-09-19 06:19:53.256312 | orchestrator | 06:19:53.256 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 06:19:53.256341 | orchestrator | 06:19:53.256 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 06:19:53.256373 | orchestrator | 06:19:53.256 STDOUT terraform:  + mac = (known after apply) 2025-09-19 06:19:53.256403 | orchestrator | 06:19:53.256 STDOUT terraform:  + name = (known after apply) 2025-09-19 06:19:53.256432 | orchestrator | 06:19:53.256 STDOUT terraform:  + port = (known after apply) 2025-09-19 06:19:53.256461 | orchestrator | 06:19:53.256 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:53.256475 | orchestrator | 06:19:53.256 STDOUT terraform:  } 2025-09-19 06:19:53.256482 | orchestrator | 06:19:53.256 STDOUT terraform:  } 2025-09-19 06:19:53.256525 | orchestrator | 06:19:53.256 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-19 06:19:53.256567 | orchestrator | 06:19:53.256 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 06:19:53.256600 | orchestrator | 06:19:53.256 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 06:19:53.256637 | orchestrator | 06:19:53.256 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 06:19:53.256667 | orchestrator | 06:19:53.256 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 06:19:53.256702 | orchestrator | 06:19:53.256 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:53.256724 | orchestrator | 06:19:53.256 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.256744 | orchestrator | 06:19:53.256 STDOUT terraform:  + config_drive = true 2025-09-19 06:19:53.256779 | orchestrator | 06:19:53.256 STDOUT terraform:  + created = (known after apply) 2025-09-19 06:19:53.256811 | orchestrator | 06:19:53.256 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 06:19:53.256839 | orchestrator | 06:19:53.256 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 06:19:53.256881 | orchestrator | 06:19:53.256 STDOUT terraform:  + force_delete = false 2025-09-19 06:19:53.256931 | orchestrator | 06:19:53.256 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 06:19:53.256986 | orchestrator | 06:19:53.256 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.257022 | orchestrator | 06:19:53.256 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 06:19:53.257056 | orchestrator | 06:19:53.257 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 06:19:53.257082 | orchestrator | 06:19:53.257 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 06:19:53.257111 | orchestrator | 06:19:53.257 STDOUT terraform:  + name = "testbed-node-0" 2025-09-19 06:19:53.257135 | orchestrator | 06:19:53.257 STDOUT terraform:  + power_state = "active" 2025-09-19 06:19:53.257169 | orchestrator | 06:19:53.257 STDOUT terraform:  + region = (known after 
apply) 2025-09-19 06:19:53.257202 | orchestrator | 06:19:53.257 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 06:19:53.257224 | orchestrator | 06:19:53.257 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 06:19:53.257259 | orchestrator | 06:19:53.257 STDOUT terraform:  + updated = (known after apply) 2025-09-19 06:19:53.257310 | orchestrator | 06:19:53.257 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 06:19:53.257335 | orchestrator | 06:19:53.257 STDOUT terraform:  + block_device { 2025-09-19 06:19:53.257362 | orchestrator | 06:19:53.257 STDOUT terraform:  + boot_index = 0 2025-09-19 06:19:53.257389 | orchestrator | 06:19:53.257 STDOUT terraform:  + delete_on_termination = false 2025-09-19 06:19:53.257417 | orchestrator | 06:19:53.257 STDOUT terraform:  + destination_type = "volume" 2025-09-19 06:19:53.257444 | orchestrator | 06:19:53.257 STDOUT terraform:  + multiattach = false 2025-09-19 06:19:53.257472 | orchestrator | 06:19:53.257 STDOUT terraform:  + source_type = "volume" 2025-09-19 06:19:53.257508 | orchestrator | 06:19:53.257 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:53.257521 | orchestrator | 06:19:53.257 STDOUT terraform:  } 2025-09-19 06:19:53.257535 | orchestrator | 06:19:53.257 STDOUT terraform:  + network { 2025-09-19 06:19:53.257556 | orchestrator | 06:19:53.257 STDOUT terraform:  + access_network = false 2025-09-19 06:19:53.257585 | orchestrator | 06:19:53.257 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 06:19:53.257614 | orchestrator | 06:19:53.257 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 06:19:53.257646 | orchestrator | 06:19:53.257 STDOUT terraform:  + mac = (known after apply) 2025-09-19 06:19:53.257677 | orchestrator | 06:19:53.257 STDOUT terraform:  + name = (known after apply) 2025-09-19 06:19:53.257707 | orchestrator | 06:19:53.257 STDOUT terraform:  + port = (known after apply) 2025-09-19 
06:19:53.257737 | orchestrator | 06:19:53.257 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:53.257743 | orchestrator | 06:19:53.257 STDOUT terraform:  } 2025-09-19 06:19:53.257759 | orchestrator | 06:19:53.257 STDOUT terraform:  } 2025-09-19 06:19:53.257823 | orchestrator | 06:19:53.257 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-19 06:19:53.257868 | orchestrator | 06:19:53.257 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 06:19:53.257906 | orchestrator | 06:19:53.257 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 06:19:53.257924 | orchestrator | 06:19:53.257 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 06:19:53.257959 | orchestrator | 06:19:53.257 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 06:19:53.257993 | orchestrator | 06:19:53.257 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:53.258045 | orchestrator | 06:19:53.257 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.258105 | orchestrator | 06:19:53.258 STDOUT terraform:  + config_drive = true 2025-09-19 06:19:53.258142 | orchestrator | 06:19:53.258 STDOUT terraform:  + created = (known after apply) 2025-09-19 06:19:53.258177 | orchestrator | 06:19:53.258 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 06:19:53.258207 | orchestrator | 06:19:53.258 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 06:19:53.258255 | orchestrator | 06:19:53.258 STDOUT terraform:  + force_delete = false 2025-09-19 06:19:53.258294 | orchestrator | 06:19:53.258 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 06:19:53.258323 | orchestrator | 06:19:53.258 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.258357 | orchestrator | 06:19:53.258 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 06:19:53.258392 | orchestrator | 06:19:53.258 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-19 06:19:53.258418 | orchestrator | 06:19:53.258 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 06:19:53.258447 | orchestrator | 06:19:53.258 STDOUT terraform:  + name = "testbed-node-1" 2025-09-19 06:19:53.258474 | orchestrator | 06:19:53.258 STDOUT terraform:  + power_state = "active" 2025-09-19 06:19:53.258507 | orchestrator | 06:19:53.258 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.258541 | orchestrator | 06:19:53.258 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 06:19:53.258564 | orchestrator | 06:19:53.258 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 06:19:53.258598 | orchestrator | 06:19:53.258 STDOUT terraform:  + updated = (known after apply) 2025-09-19 06:19:53.258648 | orchestrator | 06:19:53.258 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 06:19:53.258677 | orchestrator | 06:19:53.258 STDOUT terraform:  + block_device { 2025-09-19 06:19:53.258684 | orchestrator | 06:19:53.258 STDOUT terraform:  + boot_index = 0 2025-09-19 06:19:53.258715 | orchestrator | 06:19:53.258 STDOUT terraform:  + delete_on_termination = false 2025-09-19 06:19:53.258751 | orchestrator | 06:19:53.258 STDOUT terraform:  + destination_type = "volume" 2025-09-19 06:19:53.258768 | orchestrator | 06:19:53.258 STDOUT terraform:  + multiattach = false 2025-09-19 06:19:53.258797 | orchestrator | 06:19:53.258 STDOUT terraform:  + source_type = "volume" 2025-09-19 06:19:53.258835 | orchestrator | 06:19:53.258 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:53.258852 | orchestrator | 06:19:53.258 STDOUT terraform:  } 2025-09-19 06:19:53.258879 | orchestrator | 06:19:53.258 STDOUT terraform:  + network { 2025-09-19 06:19:53.258899 | orchestrator | 06:19:53.258 STDOUT terraform:  + access_network = false 2025-09-19 06:19:53.258931 | orchestrator | 06:19:53.258 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-19 06:19:53.258960 | orchestrator | 06:19:53.258 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 06:19:53.258992 | orchestrator | 06:19:53.258 STDOUT terraform:  + mac = (known after apply) 2025-09-19 06:19:53.259023 | orchestrator | 06:19:53.258 STDOUT terraform:  + name = (known after apply) 2025-09-19 06:19:53.259054 | orchestrator | 06:19:53.259 STDOUT terraform:  + port = (known after apply) 2025-09-19 06:19:53.259083 | orchestrator | 06:19:53.259 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 06:19:53.259090 | orchestrator | 06:19:53.259 STDOUT terraform:  } 2025-09-19 06:19:53.259106 | orchestrator | 06:19:53.259 STDOUT terraform:  } 2025-09-19 06:19:53.259149 | orchestrator | 06:19:53.259 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-19 06:19:53.259190 | orchestrator | 06:19:53.259 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 06:19:53.259225 | orchestrator | 06:19:53.259 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 06:19:53.259257 | orchestrator | 06:19:53.259 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 06:19:53.259298 | orchestrator | 06:19:53.259 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 06:19:53.259325 | orchestrator | 06:19:53.259 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:53.259351 | orchestrator | 06:19:53.259 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 06:19:53.259373 | orchestrator | 06:19:53.259 STDOUT terraform:  + config_drive = true 2025-09-19 06:19:53.259406 | orchestrator | 06:19:53.259 STDOUT terraform:  + created = (known after apply) 2025-09-19 06:19:53.259440 | orchestrator | 06:19:53.259 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 06:19:53.259470 | orchestrator | 06:19:53.259 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 06:19:53.259492 | orchestrator | 06:19:53.259 
2025-09-19 06:19:53 | orchestrator | STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 06:19:53.270947 | orchestrator | 06:19:53.270 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 06:19:53.270981 | orchestrator | 06:19:53.270 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:53.271016 | orchestrator | 06:19:53.270 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 06:19:53.271054 | orchestrator | 06:19:53.271 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 06:19:53.271085 | orchestrator | 06:19:53.271 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 06:19:53.271122 | orchestrator | 06:19:53.271 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 06:19:53.271157 | orchestrator | 06:19:53.271 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.271191 | orchestrator | 06:19:53.271 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 06:19:53.271226 | orchestrator | 06:19:53.271 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 06:19:53.271264 | orchestrator | 06:19:53.271 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 06:19:53.271296 | orchestrator | 06:19:53.271 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 06:19:53.271330 | orchestrator | 06:19:53.271 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.271364 | orchestrator | 06:19:53.271 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 06:19:53.271399 | orchestrator | 06:19:53.271 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:53.271418 | orchestrator | 06:19:53.271 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 06:19:53.271446 | orchestrator | 06:19:53.271 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 06:19:53.271460 | orchestrator | 06:19:53.271 STDOUT terraform:  } 2025-09-19 06:19:53.271481 | orchestrator | 06:19:53.271 STDOUT terraform:  
+ allowed_address_pairs { 2025-09-19 06:19:53.271509 | orchestrator | 06:19:53.271 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 06:19:53.271515 | orchestrator | 06:19:53.271 STDOUT terraform:  } 2025-09-19 06:19:53.271538 | orchestrator | 06:19:53.271 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 06:19:53.271564 | orchestrator | 06:19:53.271 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 06:19:53.271577 | orchestrator | 06:19:53.271 STDOUT terraform:  } 2025-09-19 06:19:53.271592 | orchestrator | 06:19:53.271 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 06:19:53.271620 | orchestrator | 06:19:53.271 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 06:19:53.271633 | orchestrator | 06:19:53.271 STDOUT terraform:  } 2025-09-19 06:19:53.271656 | orchestrator | 06:19:53.271 STDOUT terraform:  + binding (known after apply) 2025-09-19 06:19:53.271669 | orchestrator | 06:19:53.271 STDOUT terraform:  + fixed_ip { 2025-09-19 06:19:53.271693 | orchestrator | 06:19:53.271 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-19 06:19:53.271722 | orchestrator | 06:19:53.271 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 06:19:53.271736 | orchestrator | 06:19:53.271 STDOUT terraform:  } 2025-09-19 06:19:53.271749 | orchestrator | 06:19:53.271 STDOUT terraform:  } 2025-09-19 06:19:53.271794 | orchestrator | 06:19:53.271 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-19 06:19:53.271838 | orchestrator | 06:19:53.271 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 06:19:53.271875 | orchestrator | 06:19:53.271 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 06:19:53.271911 | orchestrator | 06:19:53.271 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 06:19:53.271945 | orchestrator | 06:19:53.271 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-09-19 06:19:53.271979 | orchestrator | 06:19:53.271 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:53.272015 | orchestrator | 06:19:53.271 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 06:19:53.272049 | orchestrator | 06:19:53.272 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 06:19:53.272090 | orchestrator | 06:19:53.272 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 06:19:53.272120 | orchestrator | 06:19:53.272 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 06:19:53.272158 | orchestrator | 06:19:53.272 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.272191 | orchestrator | 06:19:53.272 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 06:19:53.272227 | orchestrator | 06:19:53.272 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 06:19:53.272263 | orchestrator | 06:19:53.272 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 06:19:53.272299 | orchestrator | 06:19:53.272 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 06:19:53.272333 | orchestrator | 06:19:53.272 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.272367 | orchestrator | 06:19:53.272 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 06:19:53.272403 | orchestrator | 06:19:53.272 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:53.272421 | orchestrator | 06:19:53.272 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 06:19:53.272450 | orchestrator | 06:19:53.272 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 06:19:53.272462 | orchestrator | 06:19:53.272 STDOUT terraform:  } 2025-09-19 06:19:53.272477 | orchestrator | 06:19:53.272 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 06:19:53.272505 | orchestrator | 06:19:53.272 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 06:19:53.272512 | 
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }
  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }
  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after
apply) 2025-09-19 06:19:53.289190 | orchestrator | 06:19:53.289 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:53.289203 | orchestrator | 06:19:53.289 STDOUT terraform:  } 2025-09-19 06:19:53.289248 | orchestrator | 06:19:53.289 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-19 06:19:53.289306 | orchestrator | 06:19:53.289 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-19 06:19:53.289353 | orchestrator | 06:19:53.289 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 06:19:53.289396 | orchestrator | 06:19:53.289 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-19 06:19:53.289416 | orchestrator | 06:19:53.289 STDOUT terraform:  + dns_nameservers = [ 2025-09-19 06:19:53.289434 | orchestrator | 06:19:53.289 STDOUT terraform:  + "8.8.8.8", 2025-09-19 06:19:53.289445 | orchestrator | 06:19:53.289 STDOUT terraform:  + "9.9.9.9", 2025-09-19 06:19:53.289460 | orchestrator | 06:19:53.289 STDOUT terraform:  ] 2025-09-19 06:19:53.289483 | orchestrator | 06:19:53.289 STDOUT terraform:  + enable_dhcp = true 2025-09-19 06:19:53.289512 | orchestrator | 06:19:53.289 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-19 06:19:53.289561 | orchestrator | 06:19:53.289 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.289596 | orchestrator | 06:19:53.289 STDOUT terraform:  + ip_version = 4 2025-09-19 06:19:53.289640 | orchestrator | 06:19:53.289 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-19 06:19:53.289695 | orchestrator | 06:19:53.289 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-19 06:19:53.289734 | orchestrator | 06:19:53.289 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-19 06:19:53.289764 | orchestrator | 06:19:53.289 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 06:19:53.289786 | orchestrator | 06:19:53.289 STDOUT terraform:  + no_gateway = 
false 2025-09-19 06:19:53.289817 | orchestrator | 06:19:53.289 STDOUT terraform:  + region = (known after apply) 2025-09-19 06:19:53.289859 | orchestrator | 06:19:53.289 STDOUT terraform:  + service_types = (known after apply) 2025-09-19 06:19:53.289887 | orchestrator | 06:19:53.289 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 06:19:53.289905 | orchestrator | 06:19:53.289 STDOUT terraform:  + allocation_pool { 2025-09-19 06:19:53.289930 | orchestrator | 06:19:53.289 STDOUT terraform:  + end = "192.168.31.250" 2025-09-19 06:19:53.289953 | orchestrator | 06:19:53.289 STDOUT terraform:  + start = "192.168.31.200" 2025-09-19 06:19:53.289961 | orchestrator | 06:19:53.289 STDOUT terraform:  } 2025-09-19 06:19:53.289968 | orchestrator | 06:19:53.289 STDOUT terraform:  } 2025-09-19 06:19:53.289995 | orchestrator | 06:19:53.289 STDOUT terraform:  # terraform_data.image will be created 2025-09-19 06:19:53.290042 | orchestrator | 06:19:53.289 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-19 06:19:53.290053 | orchestrator | 06:19:53.290 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.290073 | orchestrator | 06:19:53.290 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-19 06:19:53.290096 | orchestrator | 06:19:53.290 STDOUT terraform:  + output = (known after apply) 2025-09-19 06:19:53.290109 | orchestrator | 06:19:53.290 STDOUT terraform:  } 2025-09-19 06:19:53.290142 | orchestrator | 06:19:53.290 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-19 06:19:53.290165 | orchestrator | 06:19:53.290 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-19 06:19:53.290188 | orchestrator | 06:19:53.290 STDOUT terraform:  + id = (known after apply) 2025-09-19 06:19:53.290209 | orchestrator | 06:19:53.290 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-19 06:19:53.290232 | orchestrator | 06:19:53.290 STDOUT terraform:  + output = (known after apply) 2025-09-19 06:19:53.290246 | 
orchestrator | 06:19:53.290 STDOUT terraform:  } 2025-09-19 06:19:53.290276 | orchestrator | 06:19:53.290 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-19 06:19:53.290287 | orchestrator | 06:19:53.290 STDOUT terraform: Changes to Outputs: 2025-09-19 06:19:53.290311 | orchestrator | 06:19:53.290 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-19 06:19:53.290335 | orchestrator | 06:19:53.290 STDOUT terraform:  + private_key = (sensitive value) 2025-09-19 06:19:53.490701 | orchestrator | 06:19:53.489 STDOUT terraform: terraform_data.image: Creating... 2025-09-19 06:19:53.490779 | orchestrator | 06:19:53.490 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-19 06:19:53.490786 | orchestrator | 06:19:53.490 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=b4369c4e-8f24-93fd-a83b-f4c2e4d1fedc] 2025-09-19 06:19:53.491315 | orchestrator | 06:19:53.491 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=e1ecd315-365b-9485-c2b9-17fe70edb89e] 2025-09-19 06:19:53.515919 | orchestrator | 06:19:53.515 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-19 06:19:53.522875 | orchestrator | 06:19:53.522 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-19 06:19:53.539970 | orchestrator | 06:19:53.539 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-19 06:19:53.546079 | orchestrator | 06:19:53.545 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-09-19 06:19:53.546873 | orchestrator | 06:19:53.546 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-19 06:19:53.548167 | orchestrator | 06:19:53.547 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-19 06:19:53.548832 | orchestrator | 06:19:53.548 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
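Editor's note: the security-group rule entries in the plan output above correspond to Terraform HCL roughly like the following minimal sketch. Attribute values are taken directly from the plan; the `security_group_id` reference is an assumption, since the actual testbed configuration is not part of this log.

```hcl
# Sketch reconstructed from the plan output; the security_group_id
# reference to security_group_node is an assumed wiring, not confirmed here.
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"                 # rule2 uses "udp", rule3 "icmp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```

With no `port_range_min`/`port_range_max` in the plan, each rule admits the whole protocol; the `security_group_rule_vrrp` variant uses IP protocol number `"112"` (VRRP) instead.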
2025-09-19 06:19:53.550048 | orchestrator | 06:19:53.549 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-19 06:19:53.565040 | orchestrator | 06:19:53.564 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-19 06:19:53.565783 | orchestrator | 06:19:53.565 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-19 06:19:54.003594 | orchestrator | 06:19:54.003 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 06:19:54.010492 | orchestrator | 06:19:54.010 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-19 06:19:54.027904 | orchestrator | 06:19:54.027 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 06:19:54.033810 | orchestrator | 06:19:54.033 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-19 06:19:54.083552 | orchestrator | 06:19:54.083 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-09-19 06:19:54.087519 | orchestrator | 06:19:54.087 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-19 06:19:54.483423 | orchestrator | 06:19:54.483 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 0s [id=52cfab21-3b6b-47d5-b462-9a85282f0715]
2025-09-19 06:19:54.493172 | orchestrator | 06:19:54.493 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-19 06:19:57.188542 | orchestrator | 06:19:57.188 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee]
2025-09-19 06:19:57.190077 | orchestrator | 06:19:57.189 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=c93c054d-d324-48de-9f46-886df7842ff7]
2025-09-19 06:19:57.205007 | orchestrator | 06:19:57.204 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-19 06:19:57.211582 | orchestrator | 06:19:57.211 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-19 06:19:57.212598 | orchestrator | 06:19:57.212 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=4dd49722-42e6-4e94-9106-a95d5116fdb0]
2025-09-19 06:19:57.219436 | orchestrator | 06:19:57.219 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-19 06:19:57.219496 | orchestrator | 06:19:57.219 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=6decddad06d924ea774f9090e05a826044501129]
2025-09-19 06:19:57.223565 | orchestrator | 06:19:57.223 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-19 06:19:57.239225 | orchestrator | 06:19:57.239 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=3567b0e7-c22b-4a61-9c89-3afd695b5400]
2025-09-19 06:19:57.239928 | orchestrator | 06:19:57.239 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=60eaf991-1ab4-4753-9c6a-a15ff08d271c]
2025-09-19 06:19:57.245938 | orchestrator | 06:19:57.245 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=5b11ce89-f193-4587-acb9-80845fc85b80]
2025-09-19 06:19:57.248417 | orchestrator | 06:19:57.248 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-19 06:19:57.250403 | orchestrator | 06:19:57.250 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-19 06:19:57.255463 | orchestrator | 06:19:57.255 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=8880ef57304a88718a6df1237494828f6c1502b7]
2025-09-19 06:19:57.257084 | orchestrator | 06:19:57.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-19 06:19:57.264144 | orchestrator | 06:19:57.264 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-19 06:19:57.303114 | orchestrator | 06:19:57.301 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=efb009a3-4323-4607-93cb-907bed8bb1e3]
2025-09-19 06:19:57.313415 | orchestrator | 06:19:57.313 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-19 06:19:57.352478 | orchestrator | 06:19:57.352 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=1cf24504-b3f3-4e87-bda4-4a150d83b5cd]
2025-09-19 06:19:57.355054 | orchestrator | 06:19:57.354 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=b81412c7-c90d-434c-bce7-fcbaa76ae3c0]
2025-09-19 06:19:57.816660 | orchestrator | 06:19:57.816 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=8330c998-6c28-4c78-b448-3f09403c0ea6]
2025-09-19 06:19:58.248745 | orchestrator | 06:19:58.248 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=d00fb83d-1489-4afa-bc02-190a0b522128]
2025-09-19 06:19:58.255835 | orchestrator | 06:19:58.255 STDOUT terraform: openstack_networking_router_v2.router: Creating...
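Editor's note: the `subnet_management` resource created above was planned with the attributes shown in the earlier plan output; in HCL it would look roughly like this sketch (attribute values from the plan, the `network_id` reference assumed):

```hcl
# Sketch reconstructed from the plan output; the network_id reference
# to net_management is an assumed wiring, not confirmed by this log.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out this slice; the rest of the /20 stays free
  # for statically assigned ports (192.168.31.x lies inside 192.168.16.0/20).
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```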
2025-09-19 06:20:00.591563 | orchestrator | 06:20:00.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=808b2dc9-0ff9-481c-981a-fd6b77cc5192]
2025-09-19 06:20:00.612576 | orchestrator | 06:20:00.612 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=c88b683f-dc1f-4f4c-815b-59025e141d37]
2025-09-19 06:20:00.633647 | orchestrator | 06:20:00.633 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=6d60e80d-2e2e-4d25-a1fd-9a57154def13]
2025-09-19 06:20:00.650124 | orchestrator | 06:20:00.649 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=cc255c2a-54c4-46d1-b37c-21de8fb436bf]
2025-09-19 06:20:00.655637 | orchestrator | 06:20:00.655 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=6b26d151-337e-426e-879e-20214fca4ff4]
2025-09-19 06:20:00.698909 | orchestrator | 06:20:00.698 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=9d9a42c6-6415-42b2-9cd6-58e920cd7387]
2025-09-19 06:20:01.624080 | orchestrator | 06:20:01.623 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=407dc5e6-a6d4-4e3a-b19a-ad1e2268258b]
2025-09-19 06:20:01.627970 | orchestrator | 06:20:01.627 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-19 06:20:01.629485 | orchestrator | 06:20:01.629 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-19 06:20:01.632492 | orchestrator | 06:20:01.632 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-19 06:20:01.831658 | orchestrator | 06:20:01.831 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=49fd3c92-cbe2-482c-a75e-14a80c224bc6]
2025-09-19 06:20:01.840127 | orchestrator | 06:20:01.839 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-19 06:20:01.840908 | orchestrator | 06:20:01.840 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-19 06:20:01.843230 | orchestrator | 06:20:01.843 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-19 06:20:01.844474 | orchestrator | 06:20:01.844 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-19 06:20:01.844890 | orchestrator | 06:20:01.844 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-19 06:20:01.855365 | orchestrator | 06:20:01.855 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-19 06:20:01.989373 | orchestrator | 06:20:01.988 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=79ed47df-fd78-4507-91d5-6c41c8a5eae3]
2025-09-19 06:20:02.399133 | orchestrator | 06:20:02.398 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=0fcaf5a0-0d82-41b6-9b8d-d2304e03b854]
2025-09-19 06:20:02.406611 | orchestrator | 06:20:02.406 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-19 06:20:02.410375 | orchestrator | 06:20:02.410 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-19 06:20:02.420743 | orchestrator | 06:20:02.420 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-19 06:20:02.421722 | orchestrator | 06:20:02.421 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-19 06:20:02.441242 | orchestrator | 06:20:02.441 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=a39759e9-60c4-478a-a2b4-8a556db4f1f4]
2025-09-19 06:20:02.452351 | orchestrator | 06:20:02.452 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-19 06:20:02.536172 | orchestrator | 06:20:02.535 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=eb54318f-7ed0-4e9b-9936-4b47679a6cb8]
2025-09-19 06:20:02.549390 | orchestrator | 06:20:02.549 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-19 06:20:02.679564 | orchestrator | 06:20:02.679 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=0bf07b8c-30a3-4dc7-8001-159333cd14b7]
2025-09-19 06:20:02.696175 | orchestrator | 06:20:02.695 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-19 06:20:02.783224 | orchestrator | 06:20:02.782 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=a8a9d671-e8fb-46dd-a29e-51e533fe8959]
2025-09-19 06:20:02.802334 | orchestrator | 06:20:02.801 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-19 06:20:02.983794 | orchestrator | 06:20:02.983 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=0ecb05fa-0287-4ea4-ac57-75d1bc7914ac]
2025-09-19 06:20:02.990112 | orchestrator | 06:20:02.989 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-19 06:20:03.145976 | orchestrator | 06:20:03.145 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=0e4287c7-6beb-4196-ab2d-6ef282ff2947]
2025-09-19 06:20:03.154216 | orchestrator | 06:20:03.153 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-19 06:20:03.155176 | orchestrator | 06:20:03.154 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=b36e2575-941f-447c-b53f-ba68f3d04f1d]
2025-09-19 06:20:03.191311 | orchestrator | 06:20:03.190 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=f4954f3e-6b74-4831-aa6d-ade5d01bd25a]
2025-09-19 06:20:03.210113 | orchestrator | 06:20:03.209 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=769eb989-4499-458b-ac96-b45c11b80af5]
2025-09-19 06:20:03.259415 | orchestrator | 06:20:03.259 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=52ffb3ae-9c33-4f38-8c3a-4759b7ddc27b]
2025-09-19 06:20:03.527055 | orchestrator | 06:20:03.526 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=aca3a522-a690-49eb-bfb9-321e2e242acb]
2025-09-19 06:20:03.737759 | orchestrator | 06:20:03.737 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b0cdafed-3584-4384-8bf1-cc0b339f605a]
2025-09-19 06:20:03.752807 | orchestrator | 06:20:03.752 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=4dd65f6d-316e-4de9-9c92-d1dd4ece4225]
2025-09-19 06:20:03.855480 | orchestrator | 06:20:03.855 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=0d861418-88af-4f7c-ae36-bda95b00e6a8]
2025-09-19 06:20:04.097686 | orchestrator | 06:20:04.097 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=e2cd11fc-d85e-4470-88e1-8f6d95cc6539]
2025-09-19 06:20:04.659973 | orchestrator | 06:20:04.659 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=2f3789ef-362a-4d80-b22e-7b1a0a96e7c2]
2025-09-19 06:20:04.696298 | orchestrator | 06:20:04.696 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-19 06:20:04.696793 | orchestrator | 06:20:04.696 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-19 06:20:04.698908 | orchestrator | 06:20:04.698 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-19 06:20:04.698956 | orchestrator | 06:20:04.698 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-19 06:20:04.717785 | orchestrator | 06:20:04.717 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-19 06:20:04.753781 | orchestrator | 06:20:04.753 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-19 06:20:04.754624 | orchestrator | 06:20:04.754 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-19 06:20:06.803195 | orchestrator | 06:20:06.802 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=5b049806-9e03-42ef-b15f-aa21614644a2]
2025-09-19 06:20:06.820859 | orchestrator | 06:20:06.820 STDOUT terraform: local_file.inventory: Creating...
2025-09-19 06:20:06.821234 | orchestrator | 06:20:06.821 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-19 06:20:06.821965 | orchestrator | 06:20:06.821 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-19 06:20:06.824693 | orchestrator | 06:20:06.824 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=fe2a952a6f0f5f6a3646ed4da1634ea1f1d5d44e]
2025-09-19 06:20:06.826630 | orchestrator | 06:20:06.826 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=80450202940fbbb71bc73e933b3559537030a87c]
2025-09-19 06:20:07.818373 | orchestrator | 06:20:07.817 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=5b049806-9e03-42ef-b15f-aa21614644a2]
2025-09-19 06:20:14.700757 | orchestrator | 06:20:14.700 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-19 06:20:14.701050 | orchestrator | 06:20:14.700 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-19 06:20:14.715598 | orchestrator | 06:20:14.715 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-19 06:20:14.730106 | orchestrator | 06:20:14.729 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-19 06:20:14.758462 | orchestrator | 06:20:14.758 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-19 06:20:14.758572 | orchestrator | 06:20:14.758 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-19 06:20:24.703343 | orchestrator | 06:20:24.702 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-19 06:20:24.703494 | orchestrator | 06:20:24.703 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-19 06:20:24.716500 | orchestrator | 06:20:24.716 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-19 06:20:24.731171 | orchestrator | 06:20:24.730 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-19 06:20:24.759526 | orchestrator | 06:20:24.759 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-19 06:20:24.759775 | orchestrator | 06:20:24.759 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-19 06:20:25.265005 | orchestrator | 06:20:25.264 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=a67bb0ad-0d46-468e-a7f9-46dedbfae1b9]
2025-09-19 06:20:34.703553 | orchestrator | 06:20:34.703 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-19 06:20:34.703918 | orchestrator | 06:20:34.703 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-19 06:20:34.716698 | orchestrator | 06:20:34.716 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-19 06:20:34.760233 | orchestrator | 06:20:34.759 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-09-19 06:20:34.760503 | orchestrator | 06:20:34.760 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-09-19 06:20:35.386227 | orchestrator | 06:20:35.385 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=44c6fde9-5429-4525-82af-cd487d1a498b]
2025-09-19 06:20:35.431288 | orchestrator | 06:20:35.431 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=7260a3e9-9da1-46cc-9717-d152fd0c09b7]
2025-09-19 06:20:35.621766 | orchestrator | 06:20:35.621 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=f1dfd634-2c1c-439d-a7fb-0c7fbe24e687]
2025-09-19 06:20:44.704060 | orchestrator | 06:20:44.703 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2025-09-19 06:20:44.717143 | orchestrator | 06:20:44.716 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2025-09-19 06:20:45.626213 | orchestrator | 06:20:45.626 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=a9c12167-a569-4ef8-9d10-abe668b0e860]
2025-09-19 06:20:54.719940 | orchestrator | 06:20:54.719 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2025-09-19 06:20:55.953700 | orchestrator | 06:20:55.953 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=042afe40-4d0a-4c22-b939-d1bcfee26518]
2025-09-19 06:20:55.976919 | orchestrator | 06:20:55.976 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-19 06:20:55.980291 | orchestrator | 06:20:55.980 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-19 06:20:55.982984 | orchestrator | 06:20:55.982 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-19 06:20:55.984488 | orchestrator | 06:20:55.984 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-19 06:20:55.987412 | orchestrator | 06:20:55.987 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-19 06:20:55.992243 | orchestrator | 06:20:55.992 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-19 06:20:55.994682 | orchestrator | 06:20:55.994 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-19 06:20:55.997549 | orchestrator | 06:20:55.997 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3776198684567082751]
2025-09-19 06:20:55.999088 | orchestrator | 06:20:55.998 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-19 06:20:55.999370 | orchestrator | 06:20:55.999 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-19 06:20:56.015078 | orchestrator | 06:20:56.014 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-19 06:20:56.029823 | orchestrator | 06:20:56.029 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
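Editor's note: the attachment IDs reported below are composites of the form `<instance_id>/<volume_id>`. From those IDs, the nine `node_volume_attachment` instances spread three volumes onto each of `node_server[3]`, `node_server[4]`, and `node_server[5]`; as an HCL sketch this could be expressed as follows, where the `count.index % 3 + 3` mapping is an assumption inferred from the log, not the testbed's actual expression:

```hcl
# Sketch; the instance/volume index mapping is inferred from the composite
# attachment ids in the log and is not confirmed by the configuration itself.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3 + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```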
2025-09-19 06:20:59.421387 | orchestrator | 06:20:59.421 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=44c6fde9-5429-4525-82af-cd487d1a498b/b81412c7-c90d-434c-bce7-fcbaa76ae3c0]
2025-09-19 06:20:59.437602 | orchestrator | 06:20:59.437 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=a67bb0ad-0d46-468e-a7f9-46dedbfae1b9/efb009a3-4323-4607-93cb-907bed8bb1e3]
2025-09-19 06:20:59.450243 | orchestrator | 06:20:59.449 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=042afe40-4d0a-4c22-b939-d1bcfee26518/5b11ce89-f193-4587-acb9-80845fc85b80]
2025-09-19 06:20:59.483465 | orchestrator | 06:20:59.483 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=44c6fde9-5429-4525-82af-cd487d1a498b/38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee]
2025-09-19 06:20:59.490170 | orchestrator | 06:20:59.489 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=a67bb0ad-0d46-468e-a7f9-46dedbfae1b9/60eaf991-1ab4-4753-9c6a-a15ff08d271c]
2025-09-19 06:20:59.502360 | orchestrator | 06:20:59.501 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=042afe40-4d0a-4c22-b939-d1bcfee26518/1cf24504-b3f3-4e87-bda4-4a150d83b5cd]
2025-09-19 06:21:05.586355 | orchestrator | 06:21:05.585 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=44c6fde9-5429-4525-82af-cd487d1a498b/c93c054d-d324-48de-9f46-886df7842ff7]
2025-09-19 06:21:05.587402 | orchestrator | 06:21:05.587 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=042afe40-4d0a-4c22-b939-d1bcfee26518/4dd49722-42e6-4e94-9106-a95d5116fdb0]
2025-09-19 06:21:05.613182 | orchestrator | 06:21:05.612 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=a67bb0ad-0d46-468e-a7f9-46dedbfae1b9/3567b0e7-c22b-4a61-9c89-3afd695b5400]
2025-09-19 06:21:06.017499 | orchestrator | 06:21:06.017 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-19 06:21:16.018598 | orchestrator | 06:21:16.018 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-19 06:21:16.421957 | orchestrator | 06:21:16.421 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=9a420a74-ab97-4677-ba51-47f419fc52eb]
2025-09-19 06:21:18.916280 | orchestrator | 06:21:18.914 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-19 06:21:18.916366 | orchestrator | 06:21:18.916 STDOUT terraform: Outputs:
2025-09-19 06:21:18.916383 | orchestrator | 06:21:18.916 STDOUT terraform: manager_address =
2025-09-19 06:21:18.916396 | orchestrator | 06:21:18.916 STDOUT terraform: private_key =
2025-09-19 06:21:19.237799 | orchestrator | ok: Runtime: 0:01:31.425888
2025-09-19 06:21:19.271524 |
2025-09-19 06:21:19.271640 | TASK [Create infrastructure (stable)]
2025-09-19 06:21:19.807085 | orchestrator | skipping: Conditional result was False
2025-09-19 06:21:19.824347 |
2025-09-19 06:21:19.824502 | TASK [Fetch manager address]
2025-09-19 06:21:20.237683 | orchestrator | ok
2025-09-19 06:21:20.247541 |
2025-09-19 06:21:20.247672 | TASK [Set manager_host address]
2025-09-19 06:21:20.317917 | orchestrator | ok
2025-09-19 06:21:20.327761 |
2025-09-19 06:21:20.327887 | LOOP [Update ansible collections]
2025-09-19 06:21:21.643620 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-19 06:21:21.643984 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-19 06:21:21.644041 | orchestrator | Starting galaxy collection install process
2025-09-19 06:21:21.644080 | orchestrator | Process install dependency map
2025-09-19 06:21:21.644115 | orchestrator | Starting collection install process
2025-09-19 06:21:21.644148 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2025-09-19 06:21:21.644187 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2025-09-19 06:21:21.644242 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-19 06:21:21.644324 | orchestrator | ok: Item: commons Runtime: 0:00:01.021358
2025-09-19 06:21:22.460491 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-19 06:21:22.460648 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-19 06:21:22.460719 | orchestrator | Starting galaxy collection install process
2025-09-19 06:21:22.460762 | orchestrator | Process install dependency map
2025-09-19 06:21:22.460801 | orchestrator | Starting collection install process
2025-09-19 06:21:22.460836 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2025-09-19 06:21:22.460870 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2025-09-19 06:21:22.460904 | orchestrator | osism.services:999.0.0 was installed successfully
2025-09-19 06:21:22.460956 | orchestrator | ok: Item: services Runtime: 0:00:00.546537
2025-09-19 06:21:22.478388 |
2025-09-19 06:21:22.478518 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-09-19 06:21:33.020271 | orchestrator | ok
2025-09-19 06:21:33.029957 |
2025-09-19 06:21:33.030099 | TASK [Wait a little longer for the manager so that
everything is ready] 2025-09-19 06:22:33.074856 | orchestrator | ok 2025-09-19 06:22:33.084377 | 2025-09-19 06:22:33.084489 | TASK [Fetch manager ssh hostkey] 2025-09-19 06:22:34.656418 | orchestrator | Output suppressed because no_log was given 2025-09-19 06:22:34.664140 | 2025-09-19 06:22:34.664277 | TASK [Get ssh keypair from terraform environment] 2025-09-19 06:22:35.195654 | orchestrator | ok: Runtime: 0:00:00.008592 2025-09-19 06:22:35.210431 | 2025-09-19 06:22:35.210598 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-19 06:22:35.258438 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-19 06:22:35.268123 | 2025-09-19 06:22:35.268244 | TASK [Run manager part 0] 2025-09-19 06:22:36.125943 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 06:22:36.176613 | orchestrator | 2025-09-19 06:22:36.176662 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-19 06:22:36.176669 | orchestrator | 2025-09-19 06:22:36.176681 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-19 06:22:38.092432 | orchestrator | ok: [testbed-manager] 2025-09-19 06:22:38.092510 | orchestrator | 2025-09-19 06:22:38.092551 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 06:22:38.092570 | orchestrator | 2025-09-19 06:22:38.092589 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:22:39.964126 | orchestrator | ok: [testbed-manager] 2025-09-19 06:22:39.964269 | orchestrator | 2025-09-19 06:22:39.964290 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 06:22:40.636481 | 
orchestrator | ok: [testbed-manager] 2025-09-19 06:22:40.636532 | orchestrator | 2025-09-19 06:22:40.636543 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-19 06:22:40.676966 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:22:40.677023 | orchestrator | 2025-09-19 06:22:40.677036 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-19 06:22:40.703241 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:22:40.703287 | orchestrator | 2025-09-19 06:22:40.703295 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 06:22:40.726817 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:22:40.726904 | orchestrator | 2025-09-19 06:22:40.726911 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-19 06:22:40.747438 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:22:40.747487 | orchestrator | 2025-09-19 06:22:40.747496 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-19 06:22:40.768657 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:22:40.768700 | orchestrator | 2025-09-19 06:22:40.768707 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-19 06:22:40.791791 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:22:40.791914 | orchestrator | 2025-09-19 06:22:40.791934 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-19 06:22:40.827699 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:22:40.827744 | orchestrator | 2025-09-19 06:22:40.827751 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-19 06:22:41.593457 | orchestrator | changed: [testbed-manager] 2025-09-19 06:22:41.593515 | 
orchestrator | 2025-09-19 06:22:41.593524 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-19 06:25:08.205286 | orchestrator | changed: [testbed-manager] 2025-09-19 06:25:08.205370 | orchestrator | 2025-09-19 06:25:08.205388 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-19 06:26:30.303157 | orchestrator | changed: [testbed-manager] 2025-09-19 06:26:30.303228 | orchestrator | 2025-09-19 06:26:30.303239 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 06:26:52.589093 | orchestrator | changed: [testbed-manager] 2025-09-19 06:26:52.589163 | orchestrator | 2025-09-19 06:26:52.589181 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-19 06:27:01.212444 | orchestrator | changed: [testbed-manager] 2025-09-19 06:27:01.212480 | orchestrator | 2025-09-19 06:27:01.212489 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-19 06:27:01.250101 | orchestrator | ok: [testbed-manager] 2025-09-19 06:27:01.250134 | orchestrator | 2025-09-19 06:27:01.250143 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-19 06:27:01.945488 | orchestrator | ok: [testbed-manager] 2025-09-19 06:27:01.945546 | orchestrator | 2025-09-19 06:27:01.945563 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-19 06:27:02.569352 | orchestrator | changed: [testbed-manager] 2025-09-19 06:27:02.569405 | orchestrator | 2025-09-19 06:27:02.569420 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-19 06:27:08.454287 | orchestrator | changed: [testbed-manager] 2025-09-19 06:27:08.455441 | orchestrator | 2025-09-19 06:27:08.455527 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-09-19 06:27:14.444767 | orchestrator | changed: [testbed-manager] 2025-09-19 06:27:14.444819 | orchestrator | 2025-09-19 06:27:14.444857 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-19 06:27:17.167258 | orchestrator | changed: [testbed-manager] 2025-09-19 06:27:17.167298 | orchestrator | 2025-09-19 06:27:17.167305 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-19 06:27:19.087055 | orchestrator | changed: [testbed-manager] 2025-09-19 06:27:19.087148 | orchestrator | 2025-09-19 06:27:19.087181 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-19 06:27:20.437222 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 06:27:20.437316 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 06:27:20.437330 | orchestrator | 2025-09-19 06:27:20.437344 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-19 06:27:20.480729 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 06:27:20.480802 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 06:27:20.480815 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 06:27:20.480864 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-19 06:27:24.838673 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 06:27:24.838770 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 06:27:24.838786 | orchestrator | 2025-09-19 06:27:24.838798 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-19 06:27:25.431762 | orchestrator | changed: [testbed-manager] 2025-09-19 06:27:25.431927 | orchestrator | 2025-09-19 06:27:25.431950 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-19 06:28:47.888901 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-19 06:28:47.888952 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-19 06:28:47.888962 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-19 06:28:47.888969 | orchestrator | 2025-09-19 06:28:47.888977 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-19 06:28:50.091639 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-19 06:28:50.091664 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-19 06:28:50.091669 | orchestrator | 2025-09-19 06:28:50.091673 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-19 06:28:50.091677 | orchestrator | 2025-09-19 06:28:50.091681 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:28:51.421436 | orchestrator | ok: [testbed-manager] 2025-09-19 06:28:51.421532 | orchestrator | 2025-09-19 06:28:51.421549 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-19 06:28:51.448446 | orchestrator | ok: [testbed-manager] 2025-09-19 06:28:51.448519 | 
orchestrator | 2025-09-19 06:28:51.448533 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-19 06:28:51.496170 | orchestrator | ok: [testbed-manager] 2025-09-19 06:28:51.496237 | orchestrator | 2025-09-19 06:28:51.496251 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-19 06:28:52.226101 | orchestrator | changed: [testbed-manager] 2025-09-19 06:28:52.226176 | orchestrator | 2025-09-19 06:28:52.226192 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-19 06:28:52.942724 | orchestrator | changed: [testbed-manager] 2025-09-19 06:28:52.942811 | orchestrator | 2025-09-19 06:28:52.942852 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-19 06:28:54.370746 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-19 06:28:54.370798 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-19 06:28:54.370811 | orchestrator | 2025-09-19 06:28:54.370906 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-19 06:28:55.803766 | orchestrator | changed: [testbed-manager] 2025-09-19 06:28:55.803913 | orchestrator | 2025-09-19 06:28:55.803934 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-19 06:28:57.642702 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 06:28:57.642788 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-19 06:28:57.642802 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-19 06:28:57.642814 | orchestrator | 2025-09-19 06:28:57.642886 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-19 06:28:57.699444 | orchestrator | skipping: 
[testbed-manager] 2025-09-19 06:28:57.699497 | orchestrator | 2025-09-19 06:28:57.699504 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-19 06:28:58.265764 | orchestrator | changed: [testbed-manager] 2025-09-19 06:28:58.265880 | orchestrator | 2025-09-19 06:28:58.265900 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-19 06:28:58.335290 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:28:58.335339 | orchestrator | 2025-09-19 06:28:58.335345 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-19 06:28:59.235418 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 06:28:59.235477 | orchestrator | changed: [testbed-manager] 2025-09-19 06:28:59.235486 | orchestrator | 2025-09-19 06:28:59.235494 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-19 06:28:59.274392 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:28:59.274444 | orchestrator | 2025-09-19 06:28:59.274452 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-19 06:28:59.311033 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:28:59.311086 | orchestrator | 2025-09-19 06:28:59.311095 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-19 06:28:59.348461 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:28:59.348511 | orchestrator | 2025-09-19 06:28:59.348519 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-19 06:28:59.400425 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:28:59.400535 | orchestrator | 2025-09-19 06:28:59.400567 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-19 06:29:00.129892 | orchestrator 
| ok: [testbed-manager] 2025-09-19 06:29:00.129941 | orchestrator | 2025-09-19 06:29:00.129948 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 06:29:00.129953 | orchestrator | 2025-09-19 06:29:00.129957 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:29:01.612906 | orchestrator | ok: [testbed-manager] 2025-09-19 06:29:01.612991 | orchestrator | 2025-09-19 06:29:01.613008 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-19 06:29:02.675365 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:02.675402 | orchestrator | 2025-09-19 06:29:02.675407 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:29:02.675413 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-19 06:29:02.675417 | orchestrator | 2025-09-19 06:29:03.041178 | orchestrator | ok: Runtime: 0:06:27.213560 2025-09-19 06:29:03.060944 | 2025-09-19 06:29:03.061087 | TASK [Point out that the log in on the manager is now possible] 2025-09-19 06:29:03.111073 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-19 06:29:03.127197 | 2025-09-19 06:29:03.127371 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-19 06:29:03.159230 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-09-19 06:29:03.166120 | 2025-09-19 06:29:03.166225 | TASK [Run manager part 1 + 2] 2025-09-19 06:29:04.033489 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 06:29:04.092369 | orchestrator | 2025-09-19 06:29:04.092448 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-19 06:29:04.092464 | orchestrator | 2025-09-19 06:29:04.092493 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:29:07.136960 | orchestrator | ok: [testbed-manager] 2025-09-19 06:29:07.137011 | orchestrator | 2025-09-19 06:29:07.137032 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-19 06:29:07.174686 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:29:07.174731 | orchestrator | 2025-09-19 06:29:07.174740 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-19 06:29:07.214123 | orchestrator | ok: [testbed-manager] 2025-09-19 06:29:07.214184 | orchestrator | 2025-09-19 06:29:07.214200 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 06:29:07.260313 | orchestrator | ok: [testbed-manager] 2025-09-19 06:29:07.260351 | orchestrator | 2025-09-19 06:29:07.260359 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-19 06:29:07.328436 | orchestrator | ok: [testbed-manager] 2025-09-19 06:29:07.328478 | orchestrator | 2025-09-19 06:29:07.328485 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 06:29:07.385770 | orchestrator | ok: [testbed-manager] 2025-09-19 06:29:07.385809 | orchestrator | 2025-09-19 06:29:07.385832 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 06:29:07.424524 | 
orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-19 06:29:07.424564 | orchestrator | 2025-09-19 06:29:07.424571 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-19 06:29:08.133417 | orchestrator | ok: [testbed-manager] 2025-09-19 06:29:08.133458 | orchestrator | 2025-09-19 06:29:08.133468 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 06:29:08.176720 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:29:08.176758 | orchestrator | 2025-09-19 06:29:08.176768 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 06:29:09.421369 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:09.421437 | orchestrator | 2025-09-19 06:29:09.421455 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 06:29:09.972586 | orchestrator | ok: [testbed-manager] 2025-09-19 06:29:09.972616 | orchestrator | 2025-09-19 06:29:09.972622 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 06:29:11.017114 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:11.017172 | orchestrator | 2025-09-19 06:29:11.017188 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-19 06:29:27.438168 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:27.438213 | orchestrator | 2025-09-19 06:29:27.438222 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 06:29:28.076439 | orchestrator | ok: [testbed-manager] 2025-09-19 06:29:28.076489 | orchestrator | 2025-09-19 06:29:28.076503 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-09-19 06:29:28.128148 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:29:28.128201 | orchestrator | 2025-09-19 06:29:28.128214 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-19 06:29:29.032596 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:29.032639 | orchestrator | 2025-09-19 06:29:29.032649 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-19 06:29:30.004102 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:30.004171 | orchestrator | 2025-09-19 06:29:30.004187 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-19 06:29:30.594351 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:30.594416 | orchestrator | 2025-09-19 06:29:30.594430 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-19 06:29:30.647438 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 06:29:30.647509 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 06:29:30.647518 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 06:29:30.647524 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-19 06:29:33.033277 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:33.033377 | orchestrator | 2025-09-19 06:29:33.033394 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-19 06:29:42.616714 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-19 06:29:42.616843 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-19 06:29:42.616864 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-19 06:29:42.616877 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-19 06:29:42.616899 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-19 06:29:42.616910 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-19 06:29:42.616922 | orchestrator | 2025-09-19 06:29:42.616935 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-19 06:29:43.720465 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:43.720559 | orchestrator | 2025-09-19 06:29:43.720576 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-19 06:29:43.757146 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:29:43.757225 | orchestrator | 2025-09-19 06:29:43.757240 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-19 06:29:46.739593 | orchestrator | changed: [testbed-manager] 2025-09-19 06:29:46.739647 | orchestrator | 2025-09-19 06:29:46.739654 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-19 06:29:46.778642 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:29:46.778938 | orchestrator | 2025-09-19 06:29:46.778963 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-19 06:31:25.168297 | orchestrator | changed: [testbed-manager] 2025-09-19 
06:31:25.168470 | orchestrator | 2025-09-19 06:31:25.168492 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-19 06:31:26.473012 | orchestrator | ok: [testbed-manager] 2025-09-19 06:31:26.473053 | orchestrator | 2025-09-19 06:31:26.473060 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:31:26.473067 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-19 06:31:26.473073 | orchestrator | 2025-09-19 06:31:26.787881 | orchestrator | ok: Runtime: 0:02:23.087347 2025-09-19 06:31:26.806745 | 2025-09-19 06:31:26.806916 | TASK [Reboot manager] 2025-09-19 06:31:28.343808 | orchestrator | ok: Runtime: 0:00:00.989010 2025-09-19 06:31:28.351960 | 2025-09-19 06:31:28.352077 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-19 06:31:43.675545 | orchestrator | ok 2025-09-19 06:31:43.686269 | 2025-09-19 06:31:43.686392 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-19 06:32:43.728777 | orchestrator | ok 2025-09-19 06:32:43.738192 | 2025-09-19 06:32:43.738321 | TASK [Deploy manager + bootstrap nodes] 2025-09-19 06:32:46.390593 | orchestrator | 2025-09-19 06:32:46.390816 | orchestrator | # DEPLOY MANAGER 2025-09-19 06:32:46.390843 | orchestrator | 2025-09-19 06:32:46.390858 | orchestrator | + set -e 2025-09-19 06:32:46.390872 | orchestrator | + echo 2025-09-19 06:32:46.390887 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-19 06:32:46.390905 | orchestrator | + echo 2025-09-19 06:32:46.390957 | orchestrator | + cat /opt/manager-vars.sh 2025-09-19 06:32:46.393880 | orchestrator | export NUMBER_OF_NODES=6 2025-09-19 06:32:46.393908 | orchestrator | 2025-09-19 06:32:46.393922 | orchestrator | export CEPH_VERSION=reef 2025-09-19 06:32:46.393938 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-19 06:32:46.393951 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-19 06:32:46.393973 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-19 06:32:46.393984 | orchestrator | 2025-09-19 06:32:46.394002 | orchestrator | export ARA=false 2025-09-19 06:32:46.394065 | orchestrator | export DEPLOY_MODE=manager 2025-09-19 06:32:46.394087 | orchestrator | export TEMPEST=false 2025-09-19 06:32:46.394099 | orchestrator | export IS_ZUUL=true 2025-09-19 06:32:46.394110 | orchestrator | 2025-09-19 06:32:46.394128 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.132 2025-09-19 06:32:46.394140 | orchestrator | export EXTERNAL_API=false 2025-09-19 06:32:46.394151 | orchestrator | 2025-09-19 06:32:46.394162 | orchestrator | export IMAGE_USER=ubuntu 2025-09-19 06:32:46.394177 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-19 06:32:46.394188 | orchestrator | 2025-09-19 06:32:46.394199 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-19 06:32:46.394217 | orchestrator | 2025-09-19 06:32:46.394228 | orchestrator | + echo 2025-09-19 06:32:46.394241 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 06:32:46.395113 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 06:32:46.395131 | orchestrator | ++ INTERACTIVE=false 2025-09-19 06:32:46.395144 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 06:32:46.395157 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 06:32:46.395335 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 06:32:46.395352 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 06:32:46.395377 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 06:32:46.395388 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 06:32:46.395399 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 06:32:46.395410 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 06:32:46.395422 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 06:32:46.395526 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-19 06:32:46.395541 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-09-19 06:32:46.395559 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 06:32:46.395578 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 06:32:46.395594 | orchestrator | ++ export ARA=false 2025-09-19 06:32:46.395605 | orchestrator | ++ ARA=false 2025-09-19 06:32:46.395616 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 06:32:46.395633 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 06:32:46.395643 | orchestrator | ++ export TEMPEST=false 2025-09-19 06:32:46.395655 | orchestrator | ++ TEMPEST=false 2025-09-19 06:32:46.395665 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 06:32:46.395680 | orchestrator | ++ IS_ZUUL=true 2025-09-19 06:32:46.395691 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.132 2025-09-19 06:32:46.395702 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.132 2025-09-19 06:32:46.395713 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 06:32:46.395724 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 06:32:46.395735 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 06:32:46.395746 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 06:32:46.395757 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 06:32:46.395767 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 06:32:46.395779 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 06:32:46.395813 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 06:32:46.395868 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-19 06:32:46.458276 | orchestrator | + docker version 2025-09-19 06:32:46.720775 | orchestrator | Client: Docker Engine - Community 2025-09-19 06:32:46.720935 | orchestrator | Version: 27.5.1 2025-09-19 06:32:46.720954 | orchestrator | API version: 1.47 2025-09-19 06:32:46.720967 | orchestrator | Go version: go1.22.11 2025-09-19 06:32:46.720978 | orchestrator | Git commit: 9f9e405 2025-09-19 
06:32:46.720990 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-19 06:32:46.721003 | orchestrator | OS/Arch: linux/amd64 2025-09-19 06:32:46.721015 | orchestrator | Context: default 2025-09-19 06:32:46.721027 | orchestrator | 2025-09-19 06:32:46.721039 | orchestrator | Server: Docker Engine - Community 2025-09-19 06:32:46.721051 | orchestrator | Engine: 2025-09-19 06:32:46.721064 | orchestrator | Version: 27.5.1 2025-09-19 06:32:46.721075 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-19 06:32:46.721122 | orchestrator | Go version: go1.22.11 2025-09-19 06:32:46.721151 | orchestrator | Git commit: 4c9b3b0 2025-09-19 06:32:46.721169 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-19 06:32:46.721185 | orchestrator | OS/Arch: linux/amd64 2025-09-19 06:32:46.721204 | orchestrator | Experimental: false 2025-09-19 06:32:46.721221 | orchestrator | containerd: 2025-09-19 06:32:46.721240 | orchestrator | Version: 1.7.27 2025-09-19 06:32:46.721258 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-19 06:32:46.721274 | orchestrator | runc: 2025-09-19 06:32:46.721291 | orchestrator | Version: 1.2.5 2025-09-19 06:32:46.721309 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-19 06:32:46.721325 | orchestrator | docker-init: 2025-09-19 06:32:46.721341 | orchestrator | Version: 0.19.0 2025-09-19 06:32:46.721359 | orchestrator | GitCommit: de40ad0 2025-09-19 06:32:46.723744 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-19 06:32:46.732574 | orchestrator | + set -e 2025-09-19 06:32:46.732660 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 06:32:46.732686 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 06:32:46.732698 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 06:32:46.732709 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 06:32:46.732720 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 06:32:46.732731 | orchestrator | ++ export 
CONFIGURATION_VERSION=main 2025-09-19 06:32:46.732743 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 06:32:46.732755 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-19 06:32:46.732766 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-19 06:32:46.732776 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 06:32:46.732813 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 06:32:46.732832 | orchestrator | ++ export ARA=false 2025-09-19 06:32:46.732852 | orchestrator | ++ ARA=false 2025-09-19 06:32:46.732871 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 06:32:46.732891 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 06:32:46.732911 | orchestrator | ++ export TEMPEST=false 2025-09-19 06:32:46.732931 | orchestrator | ++ TEMPEST=false 2025-09-19 06:32:46.732953 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 06:32:46.732965 | orchestrator | ++ IS_ZUUL=true 2025-09-19 06:32:46.732976 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.132 2025-09-19 06:32:46.732987 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.132 2025-09-19 06:32:46.732998 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 06:32:46.733008 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 06:32:46.733019 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 06:32:46.733030 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 06:32:46.733041 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 06:32:46.733052 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 06:32:46.733063 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 06:32:46.733074 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 06:32:46.733094 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 06:32:46.733106 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 06:32:46.733117 | orchestrator | ++ INTERACTIVE=false 2025-09-19 06:32:46.733127 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 
06:32:46.733143 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 06:32:46.733154 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-19 06:32:46.733165 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-19 06:32:46.733175 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-19 06:32:46.740261 | orchestrator | + set -e 2025-09-19 06:32:46.740316 | orchestrator | + VERSION=reef 2025-09-19 06:32:46.741351 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-19 06:32:46.750256 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-19 06:32:46.750318 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-19 06:32:46.755679 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-19 06:32:46.762165 | orchestrator | + set -e 2025-09-19 06:32:46.762235 | orchestrator | + VERSION=2024.2 2025-09-19 06:32:46.762965 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-19 06:32:46.766902 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-19 06:32:46.766995 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-19 06:32:46.772099 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-19 06:32:46.773066 | orchestrator | ++ semver latest 7.0.0 2025-09-19 06:32:46.835853 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-19 06:32:46.835936 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-19 06:32:46.835951 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-19 06:32:46.835964 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-19 06:32:46.928237 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-19 06:32:46.931668 | orchestrator | + source /opt/venv/bin/activate 2025-09-19 
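
The trace above shows `set-ceph-version.sh reef` and `set-openstack-version.sh 2024.2` each doing the same thing: guard with a `grep` for the key, then `sed -i` the value in place. A minimal sketch of that pattern, with the key name and file path made parameters for illustration (the real scripts hardcode `/opt/configuration/environments/manager/configuration.yml`):

```shell
#!/usr/bin/env bash
set -e

# Sketch of the grep-then-sed pinning pattern traced above in
# set-ceph-version.sh / set-openstack-version.sh. Key and file are
# parameters here; the logged scripts hardcode both.
set_version() {
    local key="$1"      # e.g. ceph_version
    local version="$2"  # e.g. reef
    local file="$3"     # YAML file to edit in place

    # Only rewrite when the key is already present, mirroring the
    # `[[ -n $(grep ...) ]]` guard in the logged run.
    if grep -q "^${key}:" "$file"; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}
```

Usage would be `set_version ceph_version reef configuration.yml`; note the guard means a missing key is silently left alone rather than appended.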
06:32:46.933006 | orchestrator | ++ deactivate nondestructive 2025-09-19 06:32:46.933031 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:46.933051 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:46.933069 | orchestrator | ++ hash -r 2025-09-19 06:32:46.933080 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:46.933091 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-19 06:32:46.933102 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-19 06:32:46.933113 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-19 06:32:46.933125 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-19 06:32:46.933145 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-19 06:32:46.933163 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-19 06:32:46.933175 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-19 06:32:46.933191 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 06:32:46.933203 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 06:32:46.933214 | orchestrator | ++ export PATH 2025-09-19 06:32:46.933229 | orchestrator | ++ '[' -n '' ']' 2025-09-19 06:32:46.933240 | orchestrator | ++ '[' -z '' ']' 2025-09-19 06:32:46.933258 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-19 06:32:46.933269 | orchestrator | ++ PS1='(venv) ' 2025-09-19 06:32:46.933280 | orchestrator | ++ export PS1 2025-09-19 06:32:46.933291 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-19 06:32:46.933302 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-19 06:32:46.933316 | orchestrator | ++ hash -r 2025-09-19 06:32:46.933616 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-19 06:32:48.188604 | orchestrator | 2025-09-19 06:32:48.188714 | orchestrator | 
PLAY [Copy custom facts] ******************************************************* 2025-09-19 06:32:48.188731 | orchestrator | 2025-09-19 06:32:48.188743 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-19 06:32:48.791172 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:48.791280 | orchestrator | 2025-09-19 06:32:48.791294 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-19 06:32:49.822640 | orchestrator | changed: [testbed-manager] 2025-09-19 06:32:49.822740 | orchestrator | 2025-09-19 06:32:49.822756 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-19 06:32:49.822768 | orchestrator | 2025-09-19 06:32:49.822778 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:32:52.281687 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:52.281861 | orchestrator | 2025-09-19 06:32:52.281897 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-19 06:32:52.341625 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:52.341713 | orchestrator | 2025-09-19 06:32:52.341732 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-19 06:32:52.802649 | orchestrator | changed: [testbed-manager] 2025-09-19 06:32:52.802745 | orchestrator | 2025-09-19 06:32:52.802761 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-19 06:32:52.839075 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:32:52.839175 | orchestrator | 2025-09-19 06:32:52.839191 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-19 06:32:53.208691 | orchestrator | changed: [testbed-manager] 2025-09-19 06:32:53.208843 | orchestrator | 2025-09-19 06:32:53.208861 | 
orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-19 06:32:53.266294 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:32:53.266341 | orchestrator | 2025-09-19 06:32:53.266354 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-19 06:32:53.609626 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:53.609714 | orchestrator | 2025-09-19 06:32:53.609725 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-19 06:32:53.745835 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:32:53.745930 | orchestrator | 2025-09-19 06:32:53.745944 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-19 06:32:53.745955 | orchestrator | 2025-09-19 06:32:53.745968 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:32:55.527744 | orchestrator | ok: [testbed-manager] 2025-09-19 06:32:55.527927 | orchestrator | 2025-09-19 06:32:55.527946 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-19 06:32:55.641422 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-19 06:32:55.641512 | orchestrator | 2025-09-19 06:32:55.641527 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-19 06:32:55.700160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-19 06:32:55.700238 | orchestrator | 2025-09-19 06:32:55.700252 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-19 06:32:56.843742 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-19 06:32:56.843843 | orchestrator | changed: [testbed-manager] => 
(item=/opt/traefik/certificates) 2025-09-19 06:32:56.843853 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-19 06:32:56.843860 | orchestrator | 2025-09-19 06:32:56.843867 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-19 06:32:58.729196 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-19 06:32:58.729280 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-19 06:32:58.729291 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-19 06:32:58.729298 | orchestrator | 2025-09-19 06:32:58.729306 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-19 06:32:59.386883 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 06:32:59.387006 | orchestrator | changed: [testbed-manager] 2025-09-19 06:32:59.387031 | orchestrator | 2025-09-19 06:32:59.387051 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-19 06:33:00.060911 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 06:33:00.061031 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:00.061049 | orchestrator | 2025-09-19 06:33:00.061062 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-09-19 06:33:00.124650 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:33:00.124725 | orchestrator | 2025-09-19 06:33:00.124733 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-19 06:33:00.528370 | orchestrator | ok: [testbed-manager] 2025-09-19 06:33:00.528480 | orchestrator | 2025-09-19 06:33:00.528505 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-19 06:33:00.623202 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-19 06:33:00.623293 | orchestrator | 2025-09-19 06:33:00.623307 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-19 06:33:01.703216 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:01.703320 | orchestrator | 2025-09-19 06:33:01.703338 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-19 06:33:02.500601 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:02.500670 | orchestrator | 2025-09-19 06:33:02.500678 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-19 06:33:13.870344 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:13.870452 | orchestrator | 2025-09-19 06:33:13.870468 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-19 06:33:13.923187 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:33:13.923272 | orchestrator | 2025-09-19 06:33:13.923285 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-19 06:33:13.923295 | orchestrator | 2025-09-19 06:33:13.923303 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:33:15.716338 | orchestrator | ok: [testbed-manager] 2025-09-19 06:33:15.716433 | orchestrator | 2025-09-19 06:33:15.716481 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-19 06:33:15.822656 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-19 06:33:15.822740 | orchestrator | 2025-09-19 06:33:15.822754 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-19 06:33:15.876680 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 06:33:15.876769 | orchestrator | 2025-09-19 06:33:15.876813 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-19 06:33:18.468343 | orchestrator | ok: [testbed-manager] 2025-09-19 06:33:18.468446 | orchestrator | 2025-09-19 06:33:18.468463 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-19 06:33:18.523185 | orchestrator | ok: [testbed-manager] 2025-09-19 06:33:18.523266 | orchestrator | 2025-09-19 06:33:18.523283 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-19 06:33:18.665637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-19 06:33:18.665706 | orchestrator | 2025-09-19 06:33:18.665714 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-19 06:33:21.585358 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-19 06:33:21.585466 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-19 06:33:21.585483 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-19 06:33:21.585496 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-19 06:33:21.585509 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-19 06:33:21.585521 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-19 06:33:21.585533 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-19 06:33:21.585545 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-19 06:33:21.585558 | orchestrator | 2025-09-19 06:33:21.585571 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-09-19 06:33:22.242179 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:22.242282 | orchestrator | 2025-09-19 06:33:22.242297 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-19 06:33:22.876618 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:22.876720 | orchestrator | 2025-09-19 06:33:22.876736 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-19 06:33:22.958525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-19 06:33:22.958622 | orchestrator | 2025-09-19 06:33:22.958636 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-19 06:33:24.217229 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-19 06:33:24.217328 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-19 06:33:24.217343 | orchestrator | 2025-09-19 06:33:24.217356 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-19 06:33:24.840078 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:24.840176 | orchestrator | 2025-09-19 06:33:24.840191 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-19 06:33:24.899013 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:33:24.899097 | orchestrator | 2025-09-19 06:33:24.899111 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-19 06:33:24.979420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-19 06:33:24.979509 | orchestrator | 2025-09-19 06:33:24.979525 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2025-09-19 06:33:25.619900 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:25.620006 | orchestrator | 2025-09-19 06:33:25.620024 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-19 06:33:25.680690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-19 06:33:25.680971 | orchestrator | 2025-09-19 06:33:25.681004 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-19 06:33:27.084096 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 06:33:27.084197 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 06:33:27.084212 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:27.084226 | orchestrator | 2025-09-19 06:33:27.084237 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-19 06:33:27.713281 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:27.713412 | orchestrator | 2025-09-19 06:33:27.713447 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-19 06:33:27.771905 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:33:27.771998 | orchestrator | 2025-09-19 06:33:27.772015 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-19 06:33:27.877303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-19 06:33:27.877398 | orchestrator | 2025-09-19 06:33:27.877413 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-19 06:33:28.410588 | orchestrator | changed: [testbed-manager] 2025-09-19 
06:33:28.410689 | orchestrator | 2025-09-19 06:33:28.410706 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-19 06:33:28.837448 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:28.837575 | orchestrator | 2025-09-19 06:33:28.837604 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-19 06:33:30.088672 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-19 06:33:30.088823 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-19 06:33:30.088840 | orchestrator | 2025-09-19 06:33:30.088854 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-19 06:33:30.727396 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:30.727493 | orchestrator | 2025-09-19 06:33:30.727509 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-19 06:33:31.146515 | orchestrator | ok: [testbed-manager] 2025-09-19 06:33:31.146612 | orchestrator | 2025-09-19 06:33:31.146628 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-19 06:33:31.525742 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:31.525869 | orchestrator | 2025-09-19 06:33:31.525882 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-19 06:33:31.577318 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:33:31.577394 | orchestrator | 2025-09-19 06:33:31.577407 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-19 06:33:31.645216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-19 06:33:31.645299 | orchestrator | 2025-09-19 06:33:31.645314 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2025-09-19 06:33:31.681951 | orchestrator | ok: [testbed-manager] 2025-09-19 06:33:31.681978 | orchestrator | 2025-09-19 06:33:31.681990 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-19 06:33:33.729714 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-19 06:33:33.729855 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-19 06:33:33.729873 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-19 06:33:33.729885 | orchestrator | 2025-09-19 06:33:33.729898 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-19 06:33:34.449856 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:34.449933 | orchestrator | 2025-09-19 06:33:34.449945 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-19 06:33:35.189390 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:35.189478 | orchestrator | 2025-09-19 06:33:35.189490 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-19 06:33:35.920227 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:35.920293 | orchestrator | 2025-09-19 06:33:35.920300 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-19 06:33:36.001073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-19 06:33:36.001190 | orchestrator | 2025-09-19 06:33:36.001214 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-19 06:33:36.048150 | orchestrator | ok: [testbed-manager] 2025-09-19 06:33:36.048259 | orchestrator | 2025-09-19 06:33:36.048274 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2025-09-19 06:33:36.777585 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-19 06:33:36.777693 | orchestrator | 2025-09-19 06:33:36.777718 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-19 06:33:36.852997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-19 06:33:36.853082 | orchestrator | 2025-09-19 06:33:36.853095 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-19 06:33:37.592576 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:37.592676 | orchestrator | 2025-09-19 06:33:37.592692 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-19 06:33:38.181063 | orchestrator | ok: [testbed-manager] 2025-09-19 06:33:38.181162 | orchestrator | 2025-09-19 06:33:38.181178 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-19 06:33:38.240018 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:33:38.240134 | orchestrator | 2025-09-19 06:33:38.240160 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-19 06:33:38.303426 | orchestrator | ok: [testbed-manager] 2025-09-19 06:33:38.303520 | orchestrator | 2025-09-19 06:33:38.303535 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-19 06:33:39.166413 | orchestrator | changed: [testbed-manager] 2025-09-19 06:33:39.166495 | orchestrator | 2025-09-19 06:33:39.166503 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-19 06:34:46.681231 | orchestrator | changed: [testbed-manager] 2025-09-19 06:34:46.681352 | orchestrator | 2025-09-19 
06:34:46.681370 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-19 06:34:47.692283 | orchestrator | ok: [testbed-manager] 2025-09-19 06:34:47.692384 | orchestrator | 2025-09-19 06:34:47.692401 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-19 06:34:47.749466 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:34:47.749567 | orchestrator | 2025-09-19 06:34:47.749586 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-19 06:34:50.404095 | orchestrator | changed: [testbed-manager] 2025-09-19 06:34:50.404198 | orchestrator | 2025-09-19 06:34:50.404217 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-19 06:34:50.460986 | orchestrator | ok: [testbed-manager] 2025-09-19 06:34:50.461052 | orchestrator | 2025-09-19 06:34:50.461066 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-19 06:34:50.461078 | orchestrator | 2025-09-19 06:34:50.461090 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-19 06:34:50.509830 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:34:50.509913 | orchestrator | 2025-09-19 06:34:50.509930 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-19 06:35:50.561219 | orchestrator | Pausing for 60 seconds 2025-09-19 06:35:50.561334 | orchestrator | changed: [testbed-manager] 2025-09-19 06:35:50.561351 | orchestrator | 2025-09-19 06:35:50.561364 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-19 06:35:54.193631 | orchestrator | changed: [testbed-manager] 2025-09-19 06:35:54.193836 | orchestrator | 2025-09-19 06:35:54.193868 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2025-09-19 06:36:35.872815 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-19 06:36:35.872963 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-09-19 06:36:35.872983 | orchestrator | changed: [testbed-manager] 2025-09-19 06:36:35.873027 | orchestrator | 2025-09-19 06:36:35.873039 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-19 06:36:45.791282 | orchestrator | changed: [testbed-manager] 2025-09-19 06:36:45.791393 | orchestrator | 2025-09-19 06:36:45.791411 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-19 06:36:45.891287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-19 06:36:45.891381 | orchestrator | 2025-09-19 06:36:45.891396 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-19 06:36:45.891408 | orchestrator | 2025-09-19 06:36:45.891420 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-19 06:36:45.952663 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:36:45.952790 | orchestrator | 2025-09-19 06:36:45.952806 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:36:45.952820 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-19 06:36:45.952831 | orchestrator | 2025-09-19 06:36:46.065792 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-19 06:36:46.065892 | orchestrator | + deactivate 2025-09-19 06:36:46.065909 | orchestrator | + '[' -n 
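
The PLAY RECAP line above (`ok=66 changed=36 unreachable=0 failed=0 ...`) is the per-host summary a reader scans to confirm the run succeeded. In the job itself, `set -e` plus `ansible-playbook`'s exit status does the actual gating; purely as an illustration, the `failed=` count can be pulled out of such a line like this:

```shell
#!/usr/bin/env bash
set -e

# Illustrative only (not part of the job): extract the failed= count from
# an Ansible PLAY RECAP host line such as the one logged above.
recap_failed_count() {
    # $1: a recap line, e.g.
    # "testbed-manager : ok=66 changed=36 unreachable=0 failed=0 ..."
    printf '%s\n' "$1" | sed -n 's/.*failed=\([0-9][0-9]*\).*/\1/p'
}
```

A non-zero count here would correspond to `failed=N` with N > 0 in the recap, which also makes `ansible-playbook` exit non-zero.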
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-19 06:36:46.065922 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-19 06:36:46.065933 | orchestrator | + export PATH 2025-09-19 06:36:46.065945 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-19 06:36:46.065957 | orchestrator | + '[' -n '' ']' 2025-09-19 06:36:46.065969 | orchestrator | + hash -r 2025-09-19 06:36:46.066002 | orchestrator | + '[' -n '' ']' 2025-09-19 06:36:46.066014 | orchestrator | + unset VIRTUAL_ENV 2025-09-19 06:36:46.066083 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-19 06:36:46.066095 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-09-19 06:36:46.066107 | orchestrator | + unset -f deactivate 2025-09-19 06:36:46.066119 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-19 06:36:46.075269 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-19 06:36:46.075333 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-19 06:36:46.075348 | orchestrator | + local max_attempts=60 2025-09-19 06:36:46.075362 | orchestrator | + local name=ceph-ansible 2025-09-19 06:36:46.075373 | orchestrator | + local attempt_num=1 2025-09-19 06:36:46.075947 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:36:46.112055 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:36:46.112148 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-19 06:36:46.112169 | orchestrator | + local max_attempts=60 2025-09-19 06:36:46.112184 | orchestrator | + local name=kolla-ansible 2025-09-19 06:36:46.112195 | orchestrator | + local attempt_num=1 2025-09-19 06:36:46.112975 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-19 06:36:46.152445 | orchestrator | + [[ healthy 
== \h\e\a\l\t\h\y ]] 2025-09-19 06:36:46.152512 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-19 06:36:46.152522 | orchestrator | + local max_attempts=60 2025-09-19 06:36:46.152532 | orchestrator | + local name=osism-ansible 2025-09-19 06:36:46.152542 | orchestrator | + local attempt_num=1 2025-09-19 06:36:46.153801 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-19 06:36:46.196808 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:36:46.196909 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-19 06:36:46.196931 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-19 06:36:46.950240 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-19 06:36:47.195988 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-19 06:36:47.196060 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-09-19 06:36:47.196068 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-09-19 06:36:47.196090 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-19 06:36:47.196097 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-09-19 06:36:47.196110 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-09-19 06:36:47.196116 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-09-19 
06:36:47.196120 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy) 2025-09-19 06:36:47.196125 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-09-19 06:36:47.196130 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-09-19 06:36:47.196134 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-09-19 06:36:47.196139 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-09-19 06:36:47.196143 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-09-19 06:36:47.196148 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-19 06:36:47.196152 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-09-19 06:36:47.196157 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-09-19 06:36:47.203241 | orchestrator | ++ semver latest 7.0.0 2025-09-19 06:36:47.246162 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-19 06:36:47.246275 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-19 06:36:47.246299 | orchestrator | + sed -i 
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-19 06:36:47.249819 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-19 06:36:59.419936 | orchestrator | 2025-09-19 06:36:59 | INFO  | Task 9d34ed90-e27f-4189-afd8-f896876ebcd6 (resolvconf) was prepared for execution. 2025-09-19 06:36:59.420048 | orchestrator | 2025-09-19 06:36:59 | INFO  | It takes a moment until task 9d34ed90-e27f-4189-afd8-f896876ebcd6 (resolvconf) has been started and output is visible here. 2025-09-19 06:37:14.021653 | orchestrator | 2025-09-19 06:37:14.021770 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-19 06:37:14.021782 | orchestrator | 2025-09-19 06:37:14.021789 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:37:14.021817 | orchestrator | Friday 19 September 2025 06:37:03 +0000 (0:00:00.111) 0:00:00.111 ****** 2025-09-19 06:37:14.021825 | orchestrator | ok: [testbed-manager] 2025-09-19 06:37:14.021834 | orchestrator | 2025-09-19 06:37:14.021841 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-19 06:37:14.021849 | orchestrator | Friday 19 September 2025 06:37:08 +0000 (0:00:04.792) 0:00:04.903 ****** 2025-09-19 06:37:14.021855 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:37:14.021863 | orchestrator | 2025-09-19 06:37:14.021869 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-19 06:37:14.021876 | orchestrator | Friday 19 September 2025 06:37:08 +0000 (0:00:00.069) 0:00:04.972 ****** 2025-09-19 06:37:14.021884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-19 06:37:14.021892 | orchestrator | 2025-09-19 06:37:14.021898 | orchestrator | TASK 
[osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-19 06:37:14.021905 | orchestrator | Friday 19 September 2025 06:37:08 +0000 (0:00:00.080) 0:00:05.053 ****** 2025-09-19 06:37:14.021912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 06:37:14.021919 | orchestrator | 2025-09-19 06:37:14.021925 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-19 06:37:14.021932 | orchestrator | Friday 19 September 2025 06:37:08 +0000 (0:00:00.079) 0:00:05.132 ****** 2025-09-19 06:37:14.021939 | orchestrator | ok: [testbed-manager] 2025-09-19 06:37:14.021946 | orchestrator | 2025-09-19 06:37:14.021952 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-19 06:37:14.021959 | orchestrator | Friday 19 September 2025 06:37:09 +0000 (0:00:01.114) 0:00:06.247 ****** 2025-09-19 06:37:14.021966 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:37:14.021972 | orchestrator | 2025-09-19 06:37:14.021979 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-19 06:37:14.021986 | orchestrator | Friday 19 September 2025 06:37:09 +0000 (0:00:00.056) 0:00:06.303 ****** 2025-09-19 06:37:14.021992 | orchestrator | ok: [testbed-manager] 2025-09-19 06:37:14.021999 | orchestrator | 2025-09-19 06:37:14.022006 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-19 06:37:14.022013 | orchestrator | Friday 19 September 2025 06:37:09 +0000 (0:00:00.490) 0:00:06.793 ****** 2025-09-19 06:37:14.022063 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:37:14.022070 | orchestrator | 2025-09-19 06:37:14.022077 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 
2025-09-19 06:37:14.022085 | orchestrator | Friday 19 September 2025 06:37:09 +0000 (0:00:00.515) 0:00:06.878 ******
2025-09-19 06:37:14.022091 | orchestrator | changed: [testbed-manager]
2025-09-19 06:37:14.022098 | orchestrator |
2025-09-19 06:37:14.022105 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-19 06:37:14.022111 | orchestrator | Friday 19 September 2025 06:37:10 +0000 (0:00:00.515) 0:00:07.393 ******
2025-09-19 06:37:14.022118 | orchestrator | changed: [testbed-manager]
2025-09-19 06:37:14.022124 | orchestrator |
2025-09-19 06:37:14.022131 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-19 06:37:14.022138 | orchestrator | Friday 19 September 2025 06:37:11 +0000 (0:00:01.073) 0:00:08.467 ******
2025-09-19 06:37:14.022144 | orchestrator | ok: [testbed-manager]
2025-09-19 06:37:14.022151 | orchestrator |
2025-09-19 06:37:14.022158 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-19 06:37:14.022164 | orchestrator | Friday 19 September 2025 06:37:12 +0000 (0:00:00.968) 0:00:09.435 ******
2025-09-19 06:37:14.022179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-19 06:37:14.022192 | orchestrator |
2025-09-19 06:37:14.022199 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-19 06:37:14.022207 | orchestrator | Friday 19 September 2025 06:37:12 +0000 (0:00:00.091) 0:00:09.527 ******
2025-09-19 06:37:14.022215 | orchestrator | changed: [testbed-manager]
2025-09-19 06:37:14.022223 | orchestrator |
2025-09-19 06:37:14.022232 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:37:14.022241 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 06:37:14.022249 | orchestrator |
2025-09-19 06:37:14.022257 | orchestrator |
2025-09-19 06:37:14.022264 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:37:14.022272 | orchestrator | Friday 19 September 2025 06:37:13 +0000 (0:00:01.147) 0:00:10.675 ******
2025-09-19 06:37:14.022280 | orchestrator | ===============================================================================
2025-09-19 06:37:14.022289 | orchestrator | Gathering Facts --------------------------------------------------------- 4.79s
2025-09-19 06:37:14.022297 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s
2025-09-19 06:37:14.022305 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s
2025-09-19 06:37:14.022313 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s
2025-09-19 06:37:14.022320 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s
2025-09-19 06:37:14.022328 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2025-09-19 06:37:14.022351 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-09-19 06:37:14.022359 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-09-19 06:37:14.022367 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-09-19 06:37:14.022375 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-09-19 06:37:14.022383 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-09-19 06:37:14.022391 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-09-19 06:37:14.022399 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-09-19 06:37:14.321178 | orchestrator | + osism apply sshconfig
2025-09-19 06:37:26.318668 | orchestrator | 2025-09-19 06:37:26 | INFO  | Task d37e3f35-d3a4-4518-add1-b2259cfc8114 (sshconfig) was prepared for execution.
2025-09-19 06:37:26.318837 | orchestrator | 2025-09-19 06:37:26 | INFO  | It takes a moment until task d37e3f35-d3a4-4518-add1-b2259cfc8114 (sshconfig) has been started and output is visible here.
2025-09-19 06:37:37.876043 | orchestrator |
2025-09-19 06:37:37.876159 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-09-19 06:37:37.876176 | orchestrator |
2025-09-19 06:37:37.876189 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-09-19 06:37:37.876200 | orchestrator | Friday 19 September 2025 06:37:30 +0000 (0:00:00.165) 0:00:00.165 ******
2025-09-19 06:37:37.876211 | orchestrator | ok: [testbed-manager]
2025-09-19 06:37:37.876224 | orchestrator |
2025-09-19 06:37:37.876235 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-09-19 06:37:37.876246 | orchestrator | Friday 19 September 2025 06:37:30 +0000 (0:00:00.539) 0:00:00.704 ******
2025-09-19 06:37:37.876257 | orchestrator | changed: [testbed-manager]
2025-09-19 06:37:37.876269 | orchestrator |
2025-09-19 06:37:37.876280 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-09-19 06:37:37.876292 | orchestrator | Friday 19 September 2025 06:37:31 +0000 (0:00:00.482) 0:00:01.186 ******
2025-09-19 06:37:37.876303 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-09-19 06:37:37.876314 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-09-19 06:37:37.876352 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-09-19 06:37:37.876363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-09-19 06:37:37.876374 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-09-19 06:37:37.876403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-09-19 06:37:37.876414 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-09-19 06:37:37.876425 | orchestrator |
2025-09-19 06:37:37.876436 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-09-19 06:37:37.876447 | orchestrator | Friday 19 September 2025 06:37:36 +0000 (0:00:05.734) 0:00:06.921 ******
2025-09-19 06:37:37.876458 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:37:37.876469 | orchestrator |
2025-09-19 06:37:37.876480 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-09-19 06:37:37.876491 | orchestrator | Friday 19 September 2025 06:37:37 +0000 (0:00:00.070) 0:00:06.991 ******
2025-09-19 06:37:37.876501 | orchestrator | changed: [testbed-manager]
2025-09-19 06:37:37.876513 | orchestrator |
2025-09-19 06:37:37.876523 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:37:37.876536 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:37:37.876548 | orchestrator |
2025-09-19 06:37:37.876559 | orchestrator |
2025-09-19 06:37:37.876570 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:37:37.876581 | orchestrator | Friday 19 September 2025 06:37:37 +0000 (0:00:00.580) 0:00:07.572 ******
2025-09-19 06:37:37.876593 | orchestrator | ===============================================================================
2025-09-19 06:37:37.876606 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.73s
2025-09-19 06:37:37.876618 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s
2025-09-19 06:37:37.876631 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s
2025-09-19 06:37:37.876644 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.48s
2025-09-19 06:37:37.876656 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-09-19 06:37:38.163176 | orchestrator | + osism apply known-hosts
2025-09-19 06:37:50.065560 | orchestrator | 2025-09-19 06:37:50 | INFO  | Task 349e49eb-bb58-427d-9a4a-6b2ea9c3dd86 (known-hosts) was prepared for execution.
2025-09-19 06:37:50.065672 | orchestrator | 2025-09-19 06:37:50 | INFO  | It takes a moment until task 349e49eb-bb58-427d-9a4a-6b2ea9c3dd86 (known-hosts) has been started and output is visible here.
2025-09-19 06:38:07.211909 | orchestrator |
2025-09-19 06:38:07.212020 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-09-19 06:38:07.212036 | orchestrator |
2025-09-19 06:38:07.212048 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-09-19 06:38:07.212060 | orchestrator | Friday 19 September 2025 06:37:54 +0000 (0:00:00.182) 0:00:00.182 ******
2025-09-19 06:38:07.212072 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-19 06:38:07.212083 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-19 06:38:07.212094 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-19 06:38:07.212105 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-19 06:38:07.212115 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-19 06:38:07.212126 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-19 06:38:07.212137 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-19 06:38:07.212147 | orchestrator |
2025-09-19 06:38:07.212158 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-09-19 06:38:07.212170 | orchestrator | Friday 19 September 2025 06:38:00 +0000 (0:00:06.010) 0:00:06.193 ******
2025-09-19 06:38:07.212206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-19 06:38:07.212220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-19 06:38:07.212230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-19 06:38:07.212241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-19 06:38:07.212252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-19 06:38:07.212273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-19 06:38:07.212284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-19 06:38:07.212295 | orchestrator |
2025-09-19 06:38:07.212306 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:07.212317 | orchestrator | Friday 19 September 2025 06:38:00 +0000 (0:00:00.180) 0:00:06.373 ******
2025-09-19 06:38:07.212328 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHEC/3kTjaV/WYQoyCCjptVM/oqVTcTYWPecfMnmgpVO)
2025-09-19 06:38:07.212344 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQD5V9XopdS80MYDJ5Rsvi0ADMYcdOYyrXfy/L0r+LvlFUxRUcQ3WShzCnnYJI+4mtqNmPeozQmYn6mASjom1Ne93djnwVU4wmOg7t54sri78w0JtcLkY3XYVwfTWkTErkyKF/Gv9sj0mOwH8WK59LPPp5pGEV0z5r3q2LrYAA1cu4bhvoMGe/iIhNt4blz82Jy3YpIFzJ5AdZ1X2HWtG1hY1j78L5wekF/6ev0s6Vs/cTnqiXJnHtk3UdJS/8LcbFKk2bd7A1aySnHmLgwdVXnxNWQzi08bjuVqhddBtoFHstv2HbSbvrRDFyE3loz/mAM4i9x6Mj94Zsbo4rX2Lefx4IX9KtQD0z91RpaRZCDQjOvIszJK+FdljCrOu4KulT2W9tLNmyYHc35ICY3mT8imbsZIZ2zLt98+r9E6kGQjHS2tBYZt30N3SGpwm68RuYUubOvCCUI5DeWHiqsy6MsDzbCx/HvUBCfuHfvswwdhbk1FTOMw+lxBqaOkn0Yj0=)
2025-09-19 06:38:07.212364 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNVZnJCrni7qGhE16uY6cUYlxILh98Y8gdUeM8BXCYdzkIQy/XDaglBGK209g30sT9WLn4y9h13uFop3HQ8d3rI=)
2025-09-19 06:38:07.212384 | orchestrator |
2025-09-19 06:38:07.212404 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:07.212423 | orchestrator | Friday 19 September 2025 06:38:01 +0000 (0:00:01.371) 0:00:07.745 ******
2025-09-19 06:38:07.212441 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOQoBXR+OGFnSsL0E8xaHs96BMAHk49AnAvIx1Hjf7Ou)
2025-09-19 06:38:07.212499 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDV54IQuxTWGFg+wGdAQ5DJ6p5ebHxea57atvQcsne0EyFf1PlgF3xedJWqp0mQv1mQ0hJkn8y7TyetdqfGCsvT7BDL42ZCMOxNHC/EntS8gJYWb+sGMN/RuxIf33XAoe2lBEak8Vi0MUugxut89b5MEAce4QHOXUGi2Al4cQOjG0W8cRF81SB+euT0tfkdJRiFk43j9yv+8dlPmzC5zYM49SrgNCfjtLwPf6x6ze0LGckxADV6OTVxSVADLBi6KpbN2jwEygNCyIycJfVom5gXAFjdCTf0wAF5MQIcWRofXfI20SRLVSsVAqg7P1+Sn3HS+4YspJEwYj2QsmFZnD+SjOo2kDKh6pAXIod4cm/xbyKwgaAyaQHg8AlBkU4lYnpgasourDp3DOukBrZXk8IkJSUVtDZVWU3Lu0Ezwc8rv/OZoujj2lu61zKDswDpn+l4J8258xkniCvYKuPRoYyxISTuJHEDMEs9oh3P4pG7/yjX5eybvDi6piM/Ys4oBPk=)
2025-09-19 06:38:07.212522 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4m8zkd3ehSv5/dkIWjMLJzapbxOQ2c4NXttwXo7Z/4QQ4lZFviXHi0tCdxwQvfFlgbtZcQ2lfdammrk0sCSWI=)
2025-09-19 06:38:07.212545 | orchestrator |
2025-09-19 06:38:07.212556 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:07.212567 | orchestrator | Friday 19 September 2025 06:38:02 +0000 (0:00:01.167) 0:00:08.913 ******
2025-09-19 06:38:07.212578 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGCHZDBLLqVCknEKPXvHyEpzWdzOS9pFq8nf5tLa+0U8y4ng4c21jZEBsbmLoHLLUptGklUnn2uYpuqw9CuVB7c=)
2025-09-19 06:38:07.212590 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgpmO6Lh249iQUuJWWSK094a5OWQIAQ/7yj/yu0sIhiOswkyfFi9IlQTonGE3QnuUpBItGlqFvqZbsA0qPT5ma3TkOyUTaLSp7F3maSFNGvhYVlSPvhXoGUppbj1bTQD7deJFTDFfjAkBsLIVHFJdDFTaR5IL823CA6gR+1LZuqDn4joZnSfSfVembYyEI+ZiEeCKL+UJYvT+klVArOMrEaVNf7idyAU87nxoM0zKyx1/OwgSI9oE8NbNnlvIvhMGe/M6SNv4RMOBLdkiFo+6E4YuDz/NJBNpVVjeTaZ6j9VXqaLqOHsuFNQ61vDDJQtsw3CNpodyoSQnXY6J2z7/FOpqzD9EOhsfV0tFRMWaVVGh4d4sWJt0DUehvlWU8kUcvz2WcqAyd9HpAV78PTLNz25Vt5zctlV9dSFVCNp89oPzENkZpDTGRBRgvc3B6A/S2feZvr08fawHjl3YgPWGJyvDymAXb3NNJx5DVfa+t8s4PDwMExEGluit67e2dcGc=)
2025-09-19 06:38:07.212601 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHnUQZLpdPKY/lKw4Jpb9EMPYe4vgklkeeRzafeknWRg)
2025-09-19 06:38:07.212612 | orchestrator |
2025-09-19 06:38:07.212624 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:07.212634 | orchestrator | Friday 19 September 2025 06:38:03 +0000 (0:00:01.106) 0:00:10.020 ******
2025-09-19 06:38:07.212747 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaOfZ3uLYX6rm6AuaM70kLjX6mTpf9uHSAMpzcyZKr+Sb1Q+SHtbivO732UPrQ5dhJaztrnHtmiojRFW5qzKADgpIPBMzko06M2dhniGRQC9Jea6GRZs23HAAVq8Ym95T2O2CfV90nEjWwYVkhl3vs24YD+pXmFJ0OowLr++M8eQ0DhqcxIorWw9FYIyVYenuoY9ajgSCg1/AQ3p7HR075td/PktacZroaoDWx/89eRAeFAZ4O7SCI8yXVmOqQR41up+ccREF7RxkES4LVfUTSQZ/z/T/W+Yom4eenxh37AvqZYfyPAHPDmJqKz6aj1xSNhT8WD1acwgxcFlO1islft6rmvn2Tm43wHbIO8pVAPgtCUQ7DDdPu6u4HOJoLmB9sALUmxXk8TVPPgsBSeUGBw4SnX0u88wosKHHBPVMP3Ml9NCQK5HaIuuc8YzYpwB4F8E/EpVjPwI8W05UmgXcV0hoAhp1rKmC+N+copx3SJ7NfHUFh5esiMg5RwAdLtlc=)
2025-09-19 06:38:07.212761 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBTrcb4I2dzKgeMcHdtF8niVIVDkUSqrUU8hUPskPkiuJEhpLbXHD1vjamPMEhCIvg5XB2+TAppAkFoEqx7xWxA=)
2025-09-19 06:38:07.212773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAxaL6pWiPgMsfOQKMb9/hU4kfQwG4wJopL8PU3wX03Q)
2025-09-19 06:38:07.212784 | orchestrator |
2025-09-19 06:38:07.212795 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:07.212806 | orchestrator | Friday 19 September 2025 06:38:04 +0000 (0:00:01.107) 0:00:11.128 ******
2025-09-19 06:38:07.212817 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFxe0LpnH5jBFQF90kk5jUwr6Gj0sEcMspO1t2upccBRo1OFgj4e1yl1esWUL5O58d54MsYhYeGxLEvWWG6Gy3NrtOVV8MBx1rU9a36dwmvA7m/B4y2H1C+fCZd1dg0n68CfdY703pgjfGuisucJArRAgs832pK2OnkVzgjFgmOfHLKTG2TFKNhnEaSQVNRPr3Utd7thQ8n0Q70MhFaojxRjetvlHTPY4/rqySBGyaWXf1LVfobHvIux5JwruQlxyihJVvNrOnhRdReVFg7sA6Xt8WGezLuRsWmLkFf8eKkIZ6JAI9e2zz1oKth4Rs8O2V8QAYeCQdqTNJk+rjDQ7jDnfYoB1qk0Q8ekxwVe/5jDuR+HK3AkiwE/PMvtVcCFj0xeApt24hhfr8fmO0QVDhT/oXgbXG0N27vNT3ri/17wYKcLj+z1SLEryoR3lagqFfGJwbOFPp784IIURhSp65CXFAICq+1hE/Vc+KbMfyQvKCmkUuJkg/lUCJuUjc2SM=)
2025-09-19 06:38:07.212828 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKHvumOugZgxeGPUtdrERuygqEt98zktuieKOF6p9eX)
2025-09-19 06:38:07.212839 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOGXVKjCVfXRnuaxCa/92H1kJXvBNSWwSu7Uh3JbsAX8lqlcIp90VTwzs7BDGxVImRPSDDmqG81ymZzboo2kKPI=)
2025-09-19 06:38:07.212858 | orchestrator |
2025-09-19 06:38:07.212869 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:07.212880 | orchestrator | Friday 19 September 2025 06:38:06 +0000 (0:00:01.111) 0:00:12.239 ******
2025-09-19 06:38:07.212901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSJEDKutjSMPa0EKfE4cBnmsz3KVWpbhl1ci7cnRFsSUaPyWdULw9gtoG+maiSLs+JGD/2TxnP+/kRqaHZBbDDSUTeW/m0IeZ9s+br177p583ZIJ+P7wg1mScEAMefbdF2934shM4rGXpKgZi1OxTF1mtqIwsiuAmK9Or6P9P7l7DERtERrvjbe8liJgJp9KWk0XRwxbfCbVDPbTR1fr+DNsfaesTdmr/KIouUPYPsi6/JRS20nsaNWhi2RLbgrYdgjurhmkQmazftCmB0RSYxq3vCqPJ/f8s2koygKnYMjGIyAI1ny8cw9AQTNQsm2+Coadvf+U9AT6Ts6EQh0R3/AKtsR0k7dYeGRPudjhE4rIPcyOFDLLqI4SWZwUMr4Iq/jdIe0brpS3wInxmXwKFK3kyHYihESms6LrOkxGshdg42DAG+GizRngqqjyk1a/I3OwnF+U5PXv87MTmlZDr7N92w2cpRnUPNvghumRI9it1MLFhoZsjJEgBcTYFN0Gs=)
2025-09-19 06:38:17.929940 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0CrQB+L1dZbegY3eWhzFrWDxDaUoA2Te0F8gKkpTfOjINk1Rf5hve2dxaGxphcDtjC5nD+45RnPnqR7bU0thw=)
2025-09-19 06:38:17.930118 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFjHeqCdDeCRJOldQE40LxSQTo2BNc+kTtkyInpI/lbN)
2025-09-19 06:38:17.930175 | orchestrator |
2025-09-19 06:38:17.930190 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:17.930203 | orchestrator | Friday 19 September 2025 06:38:07 +0000 (0:00:01.133) 0:00:13.373 ******
2025-09-19 06:38:17.930217 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv3wVSwbItcXJZJgcx5GgWOMLLvK+zgvVRuSdLGa7ZOKMZH53prN/32+jE31SWb5bgZsuMrvyuVVO1iQpKcUboaNOVIMu+ID6x+xbCH3js8XEr6YvTyTcWXoOTXDQ4JVlSDyUjXfSgbemGohGHaJAsDq41iGrUOKWNg0G8jQnRq3zTEidla0QdfyBmHK9drzGJL2eJaJhhlubN8NljVlWBEbK4NYE+2jmoIAWqESAehNW4whdUvuVIPPmTnLnrXYCQznzvbJyuhW7HQ2ad8wEnNKpHUfwhYD9XulQ+a+1puUCl1D65R6NoAH6/KHxZa/c32Ie3gUUI8a86cQ37KX63IPISHVMDSdtQXrdBbq6X+NwnJwFnXdqfDx3nI8ZszB4/EJr3AcJR5rNdz6HQUtkZDvtkjLJBK/IGOLZCMvMeEzaN7JVOwBvecJMCMUn4cLerrXsDpyAk99VT+wCJdObZHdin4cxh1R6mxw9zCMO6tczLjJp+QeMH+3wL/KvQ1Es=)
2025-09-19 06:38:17.930232 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK4Aa+LxR1ZjNbfbycXzt70C/8XksYNn8oyYQvPQp757OL/JzToJiIS8Py8IbZ0HNI/335BqZnbPP1SOdcEV2lY=)
2025-09-19 06:38:17.930243 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJlheV+NK60eW9TGHmprW8meRYUJcLJz1u2av89Y56rY)
2025-09-19 06:38:17.930255 | orchestrator |
2025-09-19 06:38:17.930266 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-09-19 06:38:17.930279 | orchestrator | Friday 19 September 2025 06:38:08 +0000 (0:00:01.122) 0:00:14.496 ******
2025-09-19 06:38:17.930291 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-19 06:38:17.930303 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-19 06:38:17.930314 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-19 06:38:17.930325 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-19 06:38:17.930336 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-19 06:38:17.930347 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-19 06:38:17.930358 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-19 06:38:17.930369 | orchestrator |
2025-09-19 06:38:17.930380 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-09-19 06:38:17.930412 | orchestrator | Friday 19 September 2025 06:38:13 +0000 (0:00:05.215) 0:00:19.711 ******
2025-09-19 06:38:17.930424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-19 06:38:17.930437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-19 06:38:17.930535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-19 06:38:17.930550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-19 06:38:17.930564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-19 06:38:17.930577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-19 06:38:17.930590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-19 06:38:17.930603 | orchestrator |
2025-09-19 06:38:17.930615 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:17.930629 | orchestrator | Friday 19 September 2025 06:38:13 +0000 (0:00:00.176) 0:00:19.888 ******
2025-09-19 06:38:17.930641 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHEC/3kTjaV/WYQoyCCjptVM/oqVTcTYWPecfMnmgpVO)
2025-09-19 06:38:17.930683 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQD5V9XopdS80MYDJ5Rsvi0ADMYcdOYyrXfy/L0r+LvlFUxRUcQ3WShzCnnYJI+4mtqNmPeozQmYn6mASjom1Ne93djnwVU4wmOg7t54sri78w0JtcLkY3XYVwfTWkTErkyKF/Gv9sj0mOwH8WK59LPPp5pGEV0z5r3q2LrYAA1cu4bhvoMGe/iIhNt4blz82Jy3YpIFzJ5AdZ1X2HWtG1hY1j78L5wekF/6ev0s6Vs/cTnqiXJnHtk3UdJS/8LcbFKk2bd7A1aySnHmLgwdVXnxNWQzi08bjuVqhddBtoFHstv2HbSbvrRDFyE3loz/mAM4i9x6Mj94Zsbo4rX2Lefx4IX9KtQD0z91RpaRZCDQjOvIszJK+FdljCrOu4KulT2W9tLNmyYHc35ICY3mT8imbsZIZ2zLt98+r9E6kGQjHS2tBYZt30N3SGpwm68RuYUubOvCCUI5DeWHiqsy6MsDzbCx/HvUBCfuHfvswwdhbk1FTOMw+lxBqaOkn0Yj0=)
2025-09-19 06:38:17.930725 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNVZnJCrni7qGhE16uY6cUYlxILh98Y8gdUeM8BXCYdzkIQy/XDaglBGK209g30sT9WLn4y9h13uFop3HQ8d3rI=)
2025-09-19 06:38:17.930738 | orchestrator |
2025-09-19 06:38:17.930751 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:17.930764 | orchestrator | Friday 19 September 2025 06:38:14 +0000 (0:00:01.064) 0:00:20.953 ******
2025-09-19 06:38:17.930777 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDV54IQuxTWGFg+wGdAQ5DJ6p5ebHxea57atvQcsne0EyFf1PlgF3xedJWqp0mQv1mQ0hJkn8y7TyetdqfGCsvT7BDL42ZCMOxNHC/EntS8gJYWb+sGMN/RuxIf33XAoe2lBEak8Vi0MUugxut89b5MEAce4QHOXUGi2Al4cQOjG0W8cRF81SB+euT0tfkdJRiFk43j9yv+8dlPmzC5zYM49SrgNCfjtLwPf6x6ze0LGckxADV6OTVxSVADLBi6KpbN2jwEygNCyIycJfVom5gXAFjdCTf0wAF5MQIcWRofXfI20SRLVSsVAqg7P1+Sn3HS+4YspJEwYj2QsmFZnD+SjOo2kDKh6pAXIod4cm/xbyKwgaAyaQHg8AlBkU4lYnpgasourDp3DOukBrZXk8IkJSUVtDZVWU3Lu0Ezwc8rv/OZoujj2lu61zKDswDpn+l4J8258xkniCvYKuPRoYyxISTuJHEDMEs9oh3P4pG7/yjX5eybvDi6piM/Ys4oBPk=)
2025-09-19 06:38:17.930789 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4m8zkd3ehSv5/dkIWjMLJzapbxOQ2c4NXttwXo7Z/4QQ4lZFviXHi0tCdxwQvfFlgbtZcQ2lfdammrk0sCSWI=)
2025-09-19 06:38:17.930802 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOQoBXR+OGFnSsL0E8xaHs96BMAHk49AnAvIx1Hjf7Ou)
2025-09-19 06:38:17.930815 | orchestrator |
2025-09-19 06:38:17.930827 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:17.930838 | orchestrator | Friday 19 September 2025 06:38:15 +0000 (0:00:01.025) 0:00:21.978 ******
2025-09-19 06:38:17.930860 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgpmO6Lh249iQUuJWWSK094a5OWQIAQ/7yj/yu0sIhiOswkyfFi9IlQTonGE3QnuUpBItGlqFvqZbsA0qPT5ma3TkOyUTaLSp7F3maSFNGvhYVlSPvhXoGUppbj1bTQD7deJFTDFfjAkBsLIVHFJdDFTaR5IL823CA6gR+1LZuqDn4joZnSfSfVembYyEI+ZiEeCKL+UJYvT+klVArOMrEaVNf7idyAU87nxoM0zKyx1/OwgSI9oE8NbNnlvIvhMGe/M6SNv4RMOBLdkiFo+6E4YuDz/NJBNpVVjeTaZ6j9VXqaLqOHsuFNQ61vDDJQtsw3CNpodyoSQnXY6J2z7/FOpqzD9EOhsfV0tFRMWaVVGh4d4sWJt0DUehvlWU8kUcvz2WcqAyd9HpAV78PTLNz25Vt5zctlV9dSFVCNp89oPzENkZpDTGRBRgvc3B6A/S2feZvr08fawHjl3YgPWGJyvDymAXb3NNJx5DVfa+t8s4PDwMExEGluit67e2dcGc=)
2025-09-19 06:38:17.930872 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGCHZDBLLqVCknEKPXvHyEpzWdzOS9pFq8nf5tLa+0U8y4ng4c21jZEBsbmLoHLLUptGklUnn2uYpuqw9CuVB7c=)
2025-09-19 06:38:17.930883 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHnUQZLpdPKY/lKw4Jpb9EMPYe4vgklkeeRzafeknWRg)
2025-09-19 06:38:17.930894 | orchestrator |
2025-09-19 06:38:17.930905 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 06:38:17.930916 | orchestrator | Friday 19 September 2025 06:38:16 +0000 (0:00:01.049) 0:00:23.027 ******
2025-09-19 06:38:17.930938 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaOfZ3uLYX6rm6AuaM70kLjX6mTpf9uHSAMpzcyZKr+Sb1Q+SHtbivO732UPrQ5dhJaztrnHtmiojRFW5qzKADgpIPBMzko06M2dhniGRQC9Jea6GRZs23HAAVq8Ym95T2O2CfV90nEjWwYVkhl3vs24YD+pXmFJ0OowLr++M8eQ0DhqcxIorWw9FYIyVYenuoY9ajgSCg1/AQ3p7HR075td/PktacZroaoDWx/89eRAeFAZ4O7SCI8yXVmOqQR41up+ccREF7RxkES4LVfUTSQZ/z/T/W+Yom4eenxh37AvqZYfyPAHPDmJqKz6aj1xSNhT8WD1acwgxcFlO1islft6rmvn2Tm43wHbIO8pVAPgtCUQ7DDdPu6u4HOJoLmB9sALUmxXk8TVPPgsBSeUGBw4SnX0u88wosKHHBPVMP3Ml9NCQK5HaIuuc8YzYpwB4F8E/EpVjPwI8W05UmgXcV0hoAhp1rKmC+N+copx3SJ7NfHUFh5esiMg5RwAdLtlc=)
2025-09-19 06:38:17.930950 | orchestrator | changed: [testbed-manager] =>
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBTrcb4I2dzKgeMcHdtF8niVIVDkUSqrUU8hUPskPkiuJEhpLbXHD1vjamPMEhCIvg5XB2+TAppAkFoEqx7xWxA=) 2025-09-19 06:38:17.930973 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAxaL6pWiPgMsfOQKMb9/hU4kfQwG4wJopL8PU3wX03Q) 2025-09-19 06:38:22.143571 | orchestrator | 2025-09-19 06:38:22.143732 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:38:22.143750 | orchestrator | Friday 19 September 2025 06:38:17 +0000 (0:00:01.064) 0:00:24.092 ****** 2025-09-19 06:38:22.143762 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOGXVKjCVfXRnuaxCa/92H1kJXvBNSWwSu7Uh3JbsAX8lqlcIp90VTwzs7BDGxVImRPSDDmqG81ymZzboo2kKPI=) 2025-09-19 06:38:22.143778 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFxe0LpnH5jBFQF90kk5jUwr6Gj0sEcMspO1t2upccBRo1OFgj4e1yl1esWUL5O58d54MsYhYeGxLEvWWG6Gy3NrtOVV8MBx1rU9a36dwmvA7m/B4y2H1C+fCZd1dg0n68CfdY703pgjfGuisucJArRAgs832pK2OnkVzgjFgmOfHLKTG2TFKNhnEaSQVNRPr3Utd7thQ8n0Q70MhFaojxRjetvlHTPY4/rqySBGyaWXf1LVfobHvIux5JwruQlxyihJVvNrOnhRdReVFg7sA6Xt8WGezLuRsWmLkFf8eKkIZ6JAI9e2zz1oKth4Rs8O2V8QAYeCQdqTNJk+rjDQ7jDnfYoB1qk0Q8ekxwVe/5jDuR+HK3AkiwE/PMvtVcCFj0xeApt24hhfr8fmO0QVDhT/oXgbXG0N27vNT3ri/17wYKcLj+z1SLEryoR3lagqFfGJwbOFPp784IIURhSp65CXFAICq+1hE/Vc+KbMfyQvKCmkUuJkg/lUCJuUjc2SM=) 2025-09-19 06:38:22.143791 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKHvumOugZgxeGPUtdrERuygqEt98zktuieKOF6p9eX) 2025-09-19 06:38:22.143802 | orchestrator | 2025-09-19 06:38:22.143812 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:38:22.143822 | orchestrator | Friday 19 September 2025 06:38:18 +0000 (0:00:01.027) 
0:00:25.120 ****** 2025-09-19 06:38:22.143832 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0CrQB+L1dZbegY3eWhzFrWDxDaUoA2Te0F8gKkpTfOjINk1Rf5hve2dxaGxphcDtjC5nD+45RnPnqR7bU0thw=) 2025-09-19 06:38:22.143870 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSJEDKutjSMPa0EKfE4cBnmsz3KVWpbhl1ci7cnRFsSUaPyWdULw9gtoG+maiSLs+JGD/2TxnP+/kRqaHZBbDDSUTeW/m0IeZ9s+br177p583ZIJ+P7wg1mScEAMefbdF2934shM4rGXpKgZi1OxTF1mtqIwsiuAmK9Or6P9P7l7DERtERrvjbe8liJgJp9KWk0XRwxbfCbVDPbTR1fr+DNsfaesTdmr/KIouUPYPsi6/JRS20nsaNWhi2RLbgrYdgjurhmkQmazftCmB0RSYxq3vCqPJ/f8s2koygKnYMjGIyAI1ny8cw9AQTNQsm2+Coadvf+U9AT6Ts6EQh0R3/AKtsR0k7dYeGRPudjhE4rIPcyOFDLLqI4SWZwUMr4Iq/jdIe0brpS3wInxmXwKFK3kyHYihESms6LrOkxGshdg42DAG+GizRngqqjyk1a/I3OwnF+U5PXv87MTmlZDr7N92w2cpRnUPNvghumRI9it1MLFhoZsjJEgBcTYFN0Gs=) 2025-09-19 06:38:22.143880 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFjHeqCdDeCRJOldQE40LxSQTo2BNc+kTtkyInpI/lbN) 2025-09-19 06:38:22.143890 | orchestrator | 2025-09-19 06:38:22.143900 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 06:38:22.143909 | orchestrator | Friday 19 September 2025 06:38:20 +0000 (0:00:01.061) 0:00:26.181 ****** 2025-09-19 06:38:22.143919 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCv3wVSwbItcXJZJgcx5GgWOMLLvK+zgvVRuSdLGa7ZOKMZH53prN/32+jE31SWb5bgZsuMrvyuVVO1iQpKcUboaNOVIMu+ID6x+xbCH3js8XEr6YvTyTcWXoOTXDQ4JVlSDyUjXfSgbemGohGHaJAsDq41iGrUOKWNg0G8jQnRq3zTEidla0QdfyBmHK9drzGJL2eJaJhhlubN8NljVlWBEbK4NYE+2jmoIAWqESAehNW4whdUvuVIPPmTnLnrXYCQznzvbJyuhW7HQ2ad8wEnNKpHUfwhYD9XulQ+a+1puUCl1D65R6NoAH6/KHxZa/c32Ie3gUUI8a86cQ37KX63IPISHVMDSdtQXrdBbq6X+NwnJwFnXdqfDx3nI8ZszB4/EJr3AcJR5rNdz6HQUtkZDvtkjLJBK/IGOLZCMvMeEzaN7JVOwBvecJMCMUn4cLerrXsDpyAk99VT+wCJdObZHdin4cxh1R6mxw9zCMO6tczLjJp+QeMH+3wL/KvQ1Es=) 2025-09-19 06:38:22.143930 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK4Aa+LxR1ZjNbfbycXzt70C/8XksYNn8oyYQvPQp757OL/JzToJiIS8Py8IbZ0HNI/335BqZnbPP1SOdcEV2lY=) 2025-09-19 06:38:22.143940 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJlheV+NK60eW9TGHmprW8meRYUJcLJz1u2av89Y56rY) 2025-09-19 06:38:22.143949 | orchestrator | 2025-09-19 06:38:22.143959 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-19 06:38:22.143969 | orchestrator | Friday 19 September 2025 06:38:21 +0000 (0:00:01.050) 0:00:27.232 ****** 2025-09-19 06:38:22.143979 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-19 06:38:22.143989 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-19 06:38:22.143998 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-19 06:38:22.144008 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-19 06:38:22.144017 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-19 06:38:22.144026 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-19 06:38:22.144036 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-19 06:38:22.144046 | orchestrator | 
skipping: [testbed-manager] 2025-09-19 06:38:22.144056 | orchestrator | 2025-09-19 06:38:22.144082 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-19 06:38:22.144092 | orchestrator | Friday 19 September 2025 06:38:21 +0000 (0:00:00.168) 0:00:27.400 ****** 2025-09-19 06:38:22.144102 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:38:22.144112 | orchestrator | 2025-09-19 06:38:22.144121 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-19 06:38:22.144131 | orchestrator | Friday 19 September 2025 06:38:21 +0000 (0:00:00.063) 0:00:27.464 ****** 2025-09-19 06:38:22.144140 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:38:22.144150 | orchestrator | 2025-09-19 06:38:22.144160 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-19 06:38:22.144169 | orchestrator | Friday 19 September 2025 06:38:21 +0000 (0:00:00.071) 0:00:27.536 ****** 2025-09-19 06:38:22.144188 | orchestrator | changed: [testbed-manager] 2025-09-19 06:38:22.144198 | orchestrator | 2025-09-19 06:38:22.144208 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:38:22.144218 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 06:38:22.144228 | orchestrator | 2025-09-19 06:38:22.144238 | orchestrator | 2025-09-19 06:38:22.144248 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:38:22.144258 | orchestrator | Friday 19 September 2025 06:38:21 +0000 (0:00:00.534) 0:00:28.071 ****** 2025-09-19 06:38:22.144267 | orchestrator | =============================================================================== 2025-09-19 06:38:22.144277 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.01s 2025-09-19 
06:38:22.144286 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.22s 2025-09-19 06:38:22.144314 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.37s 2025-09-19 06:38:22.144324 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-09-19 06:38:22.144334 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-09-19 06:38:22.144343 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-09-19 06:38:22.144353 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-19 06:38:22.144363 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-19 06:38:22.144372 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-19 06:38:22.144382 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-19 06:38:22.144392 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-19 06:38:22.144401 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-19 06:38:22.144411 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-19 06:38:22.144421 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-19 06:38:22.144430 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-19 06:38:22.144440 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-19 06:38:22.144449 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.53s 2025-09-19 
06:38:22.144459 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-09-19 06:38:22.144469 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-09-19 06:38:22.144479 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-09-19 06:38:22.418331 | orchestrator | + osism apply squid 2025-09-19 06:38:34.521257 | orchestrator | 2025-09-19 06:38:34 | INFO  | Task d5272d21-83da-4034-8b42-07d93e4364ac (squid) was prepared for execution. 2025-09-19 06:38:34.521365 | orchestrator | 2025-09-19 06:38:34 | INFO  | It takes a moment until task d5272d21-83da-4034-8b42-07d93e4364ac (squid) has been started and output is visible here. 2025-09-19 06:40:30.046339 | orchestrator | 2025-09-19 06:40:30.046460 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-19 06:40:30.046478 | orchestrator | 2025-09-19 06:40:30.046491 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-19 06:40:30.046503 | orchestrator | Friday 19 September 2025 06:38:38 +0000 (0:00:00.164) 0:00:00.164 ****** 2025-09-19 06:40:30.046532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 06:40:30.046545 | orchestrator | 2025-09-19 06:40:30.046556 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-19 06:40:30.046594 | orchestrator | Friday 19 September 2025 06:38:38 +0000 (0:00:00.094) 0:00:00.259 ****** 2025-09-19 06:40:30.046606 | orchestrator | ok: [testbed-manager] 2025-09-19 06:40:30.046617 | orchestrator | 2025-09-19 06:40:30.046682 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-19 
06:40:30.046694 | orchestrator | Friday 19 September 2025 06:38:40 +0000 (0:00:02.462) 0:00:02.722 ****** 2025-09-19 06:40:30.046705 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-19 06:40:30.046716 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-19 06:40:30.046727 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-19 06:40:30.046738 | orchestrator | 2025-09-19 06:40:30.046748 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-19 06:40:30.046759 | orchestrator | Friday 19 September 2025 06:38:42 +0000 (0:00:01.162) 0:00:03.884 ****** 2025-09-19 06:40:30.046770 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-19 06:40:30.046781 | orchestrator | 2025-09-19 06:40:30.046791 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-19 06:40:30.046802 | orchestrator | Friday 19 September 2025 06:38:43 +0000 (0:00:01.051) 0:00:04.935 ****** 2025-09-19 06:40:30.046813 | orchestrator | ok: [testbed-manager] 2025-09-19 06:40:30.046823 | orchestrator | 2025-09-19 06:40:30.046834 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-19 06:40:30.046845 | orchestrator | Friday 19 September 2025 06:38:43 +0000 (0:00:00.349) 0:00:05.285 ****** 2025-09-19 06:40:30.046855 | orchestrator | changed: [testbed-manager] 2025-09-19 06:40:30.046866 | orchestrator | 2025-09-19 06:40:30.046879 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-19 06:40:30.046892 | orchestrator | Friday 19 September 2025 06:38:44 +0000 (0:00:00.919) 0:00:06.205 ****** 2025-09-19 06:40:30.046904 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-09-19 06:40:30.046918 | orchestrator | ok: [testbed-manager] 2025-09-19 06:40:30.046930 | orchestrator | 2025-09-19 06:40:30.046942 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-19 06:40:30.046955 | orchestrator | Friday 19 September 2025 06:39:16 +0000 (0:00:32.212) 0:00:38.417 ****** 2025-09-19 06:40:30.046968 | orchestrator | changed: [testbed-manager] 2025-09-19 06:40:30.046980 | orchestrator | 2025-09-19 06:40:30.046993 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-19 06:40:30.047007 | orchestrator | Friday 19 September 2025 06:39:29 +0000 (0:00:12.327) 0:00:50.745 ****** 2025-09-19 06:40:30.047020 | orchestrator | Pausing for 60 seconds 2025-09-19 06:40:30.047034 | orchestrator | changed: [testbed-manager] 2025-09-19 06:40:30.047047 | orchestrator | 2025-09-19 06:40:30.047060 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-19 06:40:30.047073 | orchestrator | Friday 19 September 2025 06:40:29 +0000 (0:01:00.067) 0:01:50.812 ****** 2025-09-19 06:40:30.047085 | orchestrator | ok: [testbed-manager] 2025-09-19 06:40:30.047098 | orchestrator | 2025-09-19 06:40:30.047111 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-19 06:40:30.047123 | orchestrator | Friday 19 September 2025 06:40:29 +0000 (0:00:00.073) 0:01:50.885 ****** 2025-09-19 06:40:30.047135 | orchestrator | changed: [testbed-manager] 2025-09-19 06:40:30.047148 | orchestrator | 2025-09-19 06:40:30.047161 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:40:30.047173 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:40:30.047186 | orchestrator | 2025-09-19 06:40:30.047198 | orchestrator | 2025-09-19 06:40:30.047211 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-19 06:40:30.047224 | orchestrator | Friday 19 September 2025 06:40:29 +0000 (0:00:00.632) 0:01:51.518 ****** 2025-09-19 06:40:30.047246 | orchestrator | =============================================================================== 2025-09-19 06:40:30.047257 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-19 06:40:30.047268 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.21s 2025-09-19 06:40:30.047279 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.33s 2025-09-19 06:40:30.047289 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.46s 2025-09-19 06:40:30.047300 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.16s 2025-09-19 06:40:30.047310 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s 2025-09-19 06:40:30.047321 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2025-09-19 06:40:30.047332 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2025-09-19 06:40:30.047342 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-09-19 06:40:30.047353 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-09-19 06:40:30.047364 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-09-19 06:40:30.326807 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-19 06:40:30.326899 | orchestrator | ++ semver latest 9.0.0 2025-09-19 06:40:30.381576 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-19 06:40:30.381683 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-19 06:40:30.381700 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-19 06:40:42.308069 | orchestrator | 2025-09-19 06:40:42 | INFO  | Task 4742de70-98c9-4408-aee7-6490549c1f1f (operator) was prepared for execution. 2025-09-19 06:40:42.308183 | orchestrator | 2025-09-19 06:40:42 | INFO  | It takes a moment until task 4742de70-98c9-4408-aee7-6490549c1f1f (operator) has been started and output is visible here. 2025-09-19 06:40:57.823527 | orchestrator | 2025-09-19 06:40:57.823664 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-19 06:40:57.823681 | orchestrator | 2025-09-19 06:40:57.823692 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 06:40:57.823702 | orchestrator | Friday 19 September 2025 06:40:46 +0000 (0:00:00.121) 0:00:00.121 ****** 2025-09-19 06:40:57.823730 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:40:57.823742 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:40:57.823752 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:40:57.823761 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:40:57.823771 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:40:57.823781 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:40:57.823790 | orchestrator | 2025-09-19 06:40:57.823801 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-19 06:40:57.823811 | orchestrator | Friday 19 September 2025 06:40:49 +0000 (0:00:03.253) 0:00:03.374 ****** 2025-09-19 06:40:57.823820 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:40:57.823830 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:40:57.823840 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:40:57.823850 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:40:57.823859 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:40:57.823869 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:40:57.823879 | orchestrator | 2025-09-19 
06:40:57.823888 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-19 06:40:57.823898 | orchestrator | 2025-09-19 06:40:57.823908 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-19 06:40:57.823918 | orchestrator | Friday 19 September 2025 06:40:50 +0000 (0:00:00.723) 0:00:04.098 ****** 2025-09-19 06:40:57.823927 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:40:57.823937 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:40:57.823947 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:40:57.823956 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:40:57.823966 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:40:57.823975 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:40:57.824008 | orchestrator | 2025-09-19 06:40:57.824018 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-19 06:40:57.824028 | orchestrator | Friday 19 September 2025 06:40:50 +0000 (0:00:00.151) 0:00:04.249 ****** 2025-09-19 06:40:57.824037 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:40:57.824047 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:40:57.824056 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:40:57.824066 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:40:57.824076 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:40:57.824087 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:40:57.824099 | orchestrator | 2025-09-19 06:40:57.824110 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-19 06:40:57.824120 | orchestrator | Friday 19 September 2025 06:40:50 +0000 (0:00:00.183) 0:00:04.433 ****** 2025-09-19 06:40:57.824132 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:40:57.824145 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:40:57.824156 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:40:57.824168 | 
orchestrator | changed: [testbed-node-0] 2025-09-19 06:40:57.824179 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:40:57.824190 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:40:57.824201 | orchestrator | 2025-09-19 06:40:57.824213 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-19 06:40:57.824224 | orchestrator | Friday 19 September 2025 06:40:50 +0000 (0:00:00.575) 0:00:05.009 ****** 2025-09-19 06:40:57.824235 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:40:57.824246 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:40:57.824257 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:40:57.824268 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:40:57.824278 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:40:57.824289 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:40:57.824300 | orchestrator | 2025-09-19 06:40:57.824311 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-19 06:40:57.824323 | orchestrator | Friday 19 September 2025 06:40:51 +0000 (0:00:00.866) 0:00:05.875 ****** 2025-09-19 06:40:57.824334 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-19 06:40:57.824345 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-19 06:40:57.824356 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-19 06:40:57.824367 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-19 06:40:57.824378 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-19 06:40:57.824389 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-19 06:40:57.824400 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-19 06:40:57.824411 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-19 06:40:57.824422 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-19 06:40:57.824433 | orchestrator | changed: 
[testbed-node-3] => (item=sudo) 2025-09-19 06:40:57.824442 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-19 06:40:57.824452 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-19 06:40:57.824461 | orchestrator | 2025-09-19 06:40:57.824471 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-19 06:40:57.824480 | orchestrator | Friday 19 September 2025 06:40:53 +0000 (0:00:01.211) 0:00:07.087 ****** 2025-09-19 06:40:57.824490 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:40:57.824499 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:40:57.824509 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:40:57.824518 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:40:57.824528 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:40:57.824537 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:40:57.824547 | orchestrator | 2025-09-19 06:40:57.824556 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-19 06:40:57.824566 | orchestrator | Friday 19 September 2025 06:40:54 +0000 (0:00:01.353) 0:00:08.441 ****** 2025-09-19 06:40:57.824576 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-19 06:40:57.824592 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To
2025-09-19 06:40:57.824602 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-19 06:40:57.824611 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:57.824652 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:57.824662 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:57.824672 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:57.824682 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:57.824691 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 06:40:57.824701 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:57.824710 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:57.824720 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:57.824729 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:57.824739 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:57.824748 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-19 06:40:57.824758 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:57.824767 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:57.824777 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:57.824786 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:57.824796 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:57.824805 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-19 06:40:57.824815 | orchestrator |
2025-09-19 06:40:57.824824 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-19 06:40:57.824834 | orchestrator | Friday 19 September 2025 06:40:55 +0000 (0:00:01.251) 0:00:09.693 ******
2025-09-19 06:40:57.824844 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:57.824853 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:57.824863 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:57.824872 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:57.824882 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:57.824891 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:57.824901 | orchestrator |
2025-09-19 06:40:57.824910 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-19 06:40:57.824920 | orchestrator | Friday 19 September 2025 06:40:55 +0000 (0:00:00.180) 0:00:09.873 ******
2025-09-19 06:40:57.824929 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:40:57.824938 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:40:57.824948 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:40:57.824957 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:40:57.824967 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:40:57.824976 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:40:57.824986 | orchestrator |
2025-09-19 06:40:57.824995 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-19 06:40:57.825005 | orchestrator | Friday 19 September 2025 06:40:56 +0000 (0:00:00.592) 0:00:10.466 ******
2025-09-19 06:40:57.825014 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:57.825024 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:57.825033 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:57.825043 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:57.825052 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:57.825062 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:57.825071 | orchestrator |
2025-09-19 06:40:57.825087 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-19 06:40:57.825097 | orchestrator | Friday 19 September 2025 06:40:56 +0000 (0:00:00.165) 0:00:10.631 ******
2025-09-19 06:40:57.825106 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 06:40:57.825119 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:40:57.825129 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-19 06:40:57.825139 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:40:57.825148 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-19 06:40:57.825157 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:40:57.825167 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 06:40:57.825176 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:40:57.825186 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 06:40:57.825195 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:40:57.825205 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 06:40:57.825214 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:40:57.825224 | orchestrator |
2025-09-19 06:40:57.825233 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-19 06:40:57.825243 | orchestrator | Friday 19 September 2025 06:40:57 +0000 (0:00:00.729) 0:00:11.360 ******
2025-09-19 06:40:57.825252 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:57.825262 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:57.825271 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:57.825281 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:57.825290 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:57.825299 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:57.825309 | orchestrator |
2025-09-19 06:40:57.825318 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-19 06:40:57.825334 | orchestrator | Friday 19 September 2025 06:40:57 +0000 (0:00:00.181) 0:00:11.542 ******
2025-09-19 06:40:57.825344 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:57.825353 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:57.825363 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:57.825372 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:57.825382 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:57.825391 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:57.825401 | orchestrator |
2025-09-19 06:40:57.825411 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-19 06:40:57.825420 | orchestrator | Friday 19 September 2025 06:40:57 +0000 (0:00:00.154) 0:00:11.697 ******
2025-09-19 06:40:57.825434 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:57.825444 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:57.825454 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:57.825464 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:57.825479 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:58.948806 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:58.948919 | orchestrator |
2025-09-19 06:40:58.948936 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-19 06:40:58.948948 | orchestrator | Friday 19 September 2025 06:40:57 +0000 (0:00:00.133) 0:00:11.831 ******
2025-09-19 06:40:58.948960 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:40:58.948970 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:40:58.948981 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:40:58.948992 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:40:58.949003 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:40:58.949013 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:40:58.949025 | orchestrator |
2025-09-19 06:40:58.949036 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-19 06:40:58.949047 | orchestrator | Friday 19 September 2025 06:40:58 +0000 (0:00:00.656) 0:00:12.487 ******
2025-09-19 06:40:58.949057 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:40:58.949068 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:40:58.949078 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:40:58.949118 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:40:58.949130 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:40:58.949140 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:40:58.949151 | orchestrator |
2025-09-19 06:40:58.949162 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:40:58.949174 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:58.949186 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:58.949197 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:58.949208 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:58.949218 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:58.949229 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 06:40:58.949240 | orchestrator |
2025-09-19 06:40:58.949251 | orchestrator |
2025-09-19 06:40:58.949261 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:40:58.949272 | orchestrator | Friday 19 September 2025 06:40:58 +0000 (0:00:00.243) 0:00:12.731 ******
2025-09-19 06:40:58.949283 | orchestrator | ===============================================================================
2025-09-19 06:40:58.949294 | orchestrator | Gathering Facts --------------------------------------------------------- 3.25s
2025-09-19 06:40:58.949305 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.35s
2025-09-19 06:40:58.949316 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s
2025-09-19 06:40:58.949327 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2025-09-19 06:40:58.949341 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s
2025-09-19 06:40:58.949354 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2025-09-19 06:40:58.949366 | orchestrator | Do not require tty for all users ---------------------------------------- 0.72s
2025-09-19 06:40:58.949379 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2025-09-19 06:40:58.949391 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-09-19 06:40:58.949404 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.58s
2025-09-19 06:40:58.949417 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2025-09-19 06:40:58.949431 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2025-09-19 06:40:58.949443 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2025-09-19 06:40:58.949456 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2025-09-19 06:40:58.949468 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2025-09-19 06:40:58.949481 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-09-19 06:40:58.949493 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2025-09-19 06:40:58.949505 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
2025-09-19 06:40:59.230297 | orchestrator | + osism apply --environment custom facts
2025-09-19 06:41:01.061425 | orchestrator | 2025-09-19 06:41:01 | INFO  | Trying to run play facts in environment custom
2025-09-19 06:41:11.169544 | orchestrator | 2025-09-19 06:41:11 | INFO  | Task 282c84a3-c601-4f17-912b-c252ffca5bc8 (facts) was prepared for execution.
2025-09-19 06:41:11.169666 | orchestrator | 2025-09-19 06:41:11 | INFO  | It takes a moment until task 282c84a3-c601-4f17-912b-c252ffca5bc8 (facts) has been started and output is visible here.
2025-09-19 06:41:53.509274 | orchestrator |
2025-09-19 06:41:53.509371 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-19 06:41:53.509380 | orchestrator |
2025-09-19 06:41:53.509385 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-19 06:41:53.509389 | orchestrator | Friday 19 September 2025 06:41:14 +0000 (0:00:00.086) 0:00:00.086 ******
2025-09-19 06:41:53.509393 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:53.509399 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:41:53.509404 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:53.509408 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:53.509412 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:41:53.509416 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:41:53.509420 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:53.509424 | orchestrator |
2025-09-19 06:41:53.509428 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-19 06:41:53.509432 | orchestrator | Friday 19 September 2025 06:41:16 +0000 (0:00:01.441) 0:00:01.528 ******
2025-09-19 06:41:53.509436 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:53.509439 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:53.509443 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:41:53.509447 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:53.509451 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:41:53.509454 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:53.509458 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:41:53.509462 | orchestrator |
2025-09-19 06:41:53.509466 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-19 06:41:53.509469 | orchestrator |
2025-09-19 06:41:53.509473 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-19 06:41:53.509477 | orchestrator | Friday 19 September 2025 06:41:17 +0000 (0:00:01.186) 0:00:02.715 ******
2025-09-19 06:41:53.509481 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:53.509485 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:53.509488 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:53.509492 | orchestrator |
2025-09-19 06:41:53.509496 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-19 06:41:53.509500 | orchestrator | Friday 19 September 2025 06:41:17 +0000 (0:00:00.115) 0:00:02.830 ******
2025-09-19 06:41:53.509504 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:53.509508 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:53.509511 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:53.509515 | orchestrator |
2025-09-19 06:41:53.509519 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-19 06:41:53.509523 | orchestrator | Friday 19 September 2025 06:41:17 +0000 (0:00:00.230) 0:00:03.061 ******
2025-09-19 06:41:53.509527 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:53.509531 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:53.509534 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:53.509538 | orchestrator |
2025-09-19 06:41:53.509542 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-19 06:41:53.509546 | orchestrator | Friday 19 September 2025 06:41:18 +0000 (0:00:00.211) 0:00:03.273 ******
2025-09-19 06:41:53.509551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:41:53.509556 | orchestrator |
2025-09-19 06:41:53.509560 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-19 06:41:53.509563 | orchestrator | Friday 19 September 2025 06:41:18 +0000 (0:00:00.151) 0:00:03.425 ******
2025-09-19 06:41:53.509584 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:53.509588 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:53.509632 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:53.509637 | orchestrator |
2025-09-19 06:41:53.509640 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-19 06:41:53.509644 | orchestrator | Friday 19 September 2025 06:41:18 +0000 (0:00:00.466) 0:00:03.892 ******
2025-09-19 06:41:53.509648 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:41:53.509652 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:41:53.509655 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:41:53.509659 | orchestrator |
2025-09-19 06:41:53.509663 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-19 06:41:53.509666 | orchestrator | Friday 19 September 2025 06:41:18 +0000 (0:00:00.102) 0:00:03.994 ******
2025-09-19 06:41:53.509670 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:53.509674 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:53.509677 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:53.509681 | orchestrator |
2025-09-19 06:41:53.509685 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-19 06:41:53.509689 | orchestrator | Friday 19 September 2025 06:41:19 +0000 (0:00:01.029) 0:00:05.024 ******
2025-09-19 06:41:53.509692 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:53.509696 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:53.509700 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:53.509703 | orchestrator |
2025-09-19 06:41:53.509707 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-19 06:41:53.509711 | orchestrator | Friday 19 September 2025 06:41:20 +0000 (0:00:00.464) 0:00:05.488 ******
2025-09-19 06:41:53.509715 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:53.509719 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:53.509723 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:53.509726 | orchestrator |
2025-09-19 06:41:53.509730 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-19 06:41:53.509734 | orchestrator | Friday 19 September 2025 06:41:21 +0000 (0:00:01.063) 0:00:06.552 ******
2025-09-19 06:41:53.509750 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:53.509755 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:53.509758 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:53.509762 | orchestrator |
2025-09-19 06:41:53.509766 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-19 06:41:53.509769 | orchestrator | Friday 19 September 2025 06:41:37 +0000 (0:00:16.115) 0:00:22.668 ******
2025-09-19 06:41:53.509773 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:41:53.509779 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:41:53.509783 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:41:53.509787 | orchestrator |
2025-09-19 06:41:53.509791 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-19 06:41:53.509806 | orchestrator | Friday 19 September 2025 06:41:37 +0000 (0:00:00.107) 0:00:22.775 ******
2025-09-19 06:41:53.509810 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:41:53.509814 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:41:53.509818 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:41:53.509821 | orchestrator |
2025-09-19 06:41:53.509825 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-19 06:41:53.509829 | orchestrator | Friday 19 September 2025 06:41:44 +0000 (0:00:06.900) 0:00:29.676 ******
2025-09-19 06:41:53.509833 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:53.509838 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:53.509842 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:53.509846 | orchestrator |
2025-09-19 06:41:53.509850 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-19 06:41:53.509855 | orchestrator | Friday 19 September 2025 06:41:44 +0000 (0:00:00.418) 0:00:30.094 ******
2025-09-19 06:41:53.509859 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-19 06:41:53.509868 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-19 06:41:53.509872 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-19 06:41:53.509877 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-19 06:41:53.509881 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-19 06:41:53.509885 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-19 06:41:53.509889 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-19 06:41:53.509894 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-19 06:41:53.509898 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-19 06:41:53.509902 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-19 06:41:53.509907 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-19 06:41:53.509911 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-19 06:41:53.509916 | orchestrator |
2025-09-19 06:41:53.509920 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-19 06:41:53.509924 | orchestrator | Friday 19 September 2025 06:41:48 +0000 (0:00:03.411) 0:00:33.505 ******
2025-09-19 06:41:53.509929 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:53.509934 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:53.509940 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:53.509946 | orchestrator |
2025-09-19 06:41:53.509952 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 06:41:53.509958 | orchestrator |
2025-09-19 06:41:53.509963 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 06:41:53.509969 | orchestrator | Friday 19 September 2025 06:41:49 +0000 (0:00:01.186) 0:00:34.692 ******
2025-09-19 06:41:53.509976 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:41:53.509982 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:41:53.509989 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:41:53.509996 | orchestrator | ok: [testbed-manager]
2025-09-19 06:41:53.510002 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:41:53.510007 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:41:53.510011 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:41:53.510052 | orchestrator |
2025-09-19 06:41:53.510057 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:41:53.510064 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:41:53.510071 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:41:53.510079 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:41:53.510085 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 06:41:53.510091 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:41:53.510099 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:41:53.510107 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:41:53.510113 | orchestrator |
2025-09-19 06:41:53.510119 | orchestrator |
2025-09-19 06:41:53.510125 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:41:53.510141 | orchestrator | Friday 19 September 2025 06:41:53 +0000 (0:00:03.908) 0:00:38.601 ******
2025-09-19 06:41:53.510147 | orchestrator | ===============================================================================
2025-09-19 06:41:53.510167 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.12s
2025-09-19 06:41:53.510173 | orchestrator | Install required packages (Debian) -------------------------------------- 6.90s
2025-09-19 06:41:53.510179 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.91s
2025-09-19 06:41:53.510186 | orchestrator | Copy fact files --------------------------------------------------------- 3.41s
2025-09-19 06:41:53.510196 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2025-09-19 06:41:53.510200 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s
2025-09-19 06:41:53.510210 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.19s
2025-09-19 06:41:53.727567 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2025-09-19 06:41:53.727715 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2025-09-19 06:41:53.727730 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2025-09-19 06:41:53.727740 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-09-19 06:41:53.727751 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-09-19 06:41:53.727761 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2025-09-19 06:41:53.727770 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2025-09-19 06:41:53.727780 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-09-19 06:41:53.727790 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-09-19 06:41:53.727800 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-09-19 06:41:53.727809 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-09-19 06:41:53.994675 | orchestrator | + osism apply bootstrap
2025-09-19 06:42:05.939198 | orchestrator | 2025-09-19 06:42:05 | INFO  | Task 350bcd3c-54ca-4ce9-88fe-b6ab0d2dc0f2 (bootstrap) was prepared for execution.
2025-09-19 06:42:05.939306 | orchestrator | 2025-09-19 06:42:05 | INFO  | It takes a moment until task 350bcd3c-54ca-4ce9-88fe-b6ab0d2dc0f2 (bootstrap) has been started and output is visible here.
2025-09-19 06:42:21.456568 | orchestrator |
2025-09-19 06:42:21.456728 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-19 06:42:21.456743 | orchestrator |
2025-09-19 06:42:21.456754 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-19 06:42:21.456764 | orchestrator | Friday 19 September 2025 06:42:10 +0000 (0:00:00.165) 0:00:00.165 ******
2025-09-19 06:42:21.456774 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:21.456785 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:21.456795 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:21.456805 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:21.456814 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:21.456824 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:21.456834 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:21.456843 | orchestrator |
2025-09-19 06:42:21.456853 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 06:42:21.456863 | orchestrator |
2025-09-19 06:42:21.456873 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 06:42:21.456883 | orchestrator | Friday 19 September 2025 06:42:10 +0000 (0:00:00.242) 0:00:00.407 ******
2025-09-19 06:42:21.456892 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:21.456902 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:21.456912 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:21.456921 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:21.456931 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:21.456940 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:21.456950 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:21.456985 | orchestrator |
2025-09-19 06:42:21.456995 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-19 06:42:21.457005 | orchestrator |
2025-09-19 06:42:21.457015 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 06:42:21.457024 | orchestrator | Friday 19 September 2025 06:42:13 +0000 (0:00:03.516) 0:00:03.924 ******
2025-09-19 06:42:21.457034 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-19 06:42:21.457044 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-19 06:42:21.457054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-19 06:42:21.457063 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-19 06:42:21.457072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 06:42:21.457082 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-19 06:42:21.457091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 06:42:21.457101 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-19 06:42:21.457110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 06:42:21.457121 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-19 06:42:21.457132 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-19 06:42:21.457143 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-19 06:42:21.457154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 06:42:21.457165 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-19 06:42:21.457176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 06:42:21.457187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-19 06:42:21.457199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-19 06:42:21.457210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 06:42:21.457220 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:42:21.457232 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-19 06:42:21.457242 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-19 06:42:21.457253 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-19 06:42:21.457264 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 06:42:21.457275 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-19 06:42:21.457286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-19 06:42:21.457297 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 06:42:21.457308 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:42:21.457319 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 06:42:21.457329 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 06:42:21.457341 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:42:21.457351 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-19 06:42:21.457363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-19 06:42:21.457373 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 06:42:21.457384 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-19 06:42:21.457395 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 06:42:21.457406 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:42:21.457435 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-19 06:42:21.457447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-19 06:42:21.457458 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-19 06:42:21.457469 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-19 06:42:21.457479 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-19 06:42:21.457496 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-19 06:42:21.457505 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-19 06:42:21.457516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 06:42:21.457525 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 06:42:21.457535 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-19 06:42:21.457560 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 06:42:21.457570 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 06:42:21.457580 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-19 06:42:21.457651 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:42:21.457661 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 06:42:21.457670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 06:42:21.457680 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:42:21.457689 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 06:42:21.457699 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 06:42:21.457708 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:42:21.457718 | orchestrator |
2025-09-19 06:42:21.457727 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-19 06:42:21.457737 | orchestrator |
2025-09-19 06:42:21.457746 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-19 06:42:21.457756 | orchestrator | Friday 19 September 2025 06:42:14 +0000 (0:00:00.488) 0:00:04.412 ******
2025-09-19 06:42:21.457766 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:21.457775 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:21.457785 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:21.457794 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:21.457804 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:21.457813 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:21.457823 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:21.457832 | orchestrator |
2025-09-19 06:42:21.457842 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-19 06:42:21.457851 | orchestrator | Friday 19 September 2025 06:42:15 +0000 (0:00:01.164) 0:00:05.577 ******
2025-09-19 06:42:21.457861 | orchestrator | ok: [testbed-manager]
2025-09-19 06:42:21.457870 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:42:21.457880 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:42:21.457889 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:42:21.457899 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:42:21.457908 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:42:21.457918 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:42:21.457927 | orchestrator |
2025-09-19 06:42:21.457937 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-19 06:42:21.457947 | orchestrator | Friday 19 September 2025 06:42:16 +0000 (0:00:00.287) 0:00:06.731 ******
2025-09-19 06:42:21.457957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:42:21.457970 | orchestrator |
2025-09-19 06:42:21.457979 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-19 06:42:21.457989 | orchestrator | Friday 19 September 2025 06:42:16 +0000 (0:00:00.287) 0:00:07.019 ******
2025-09-19 06:42:21.457998 | orchestrator | changed: [testbed-manager]
2025-09-19 06:42:21.458008 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:42:21.458077 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:42:21.458087 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:42:21.458097 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:21.458106 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:21.458116 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:21.458125 | orchestrator |
2025-09-19 06:42:21.458144 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-09-19 06:42:21.458154 | orchestrator | Friday 19 September 2025 06:42:18 +0000 (0:00:02.039) 0:00:09.058 ******
2025-09-19 06:42:21.458164 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:42:21.458175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:42:21.458186 | orchestrator |
2025-09-19 06:42:21.458202 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-09-19 06:42:21.458212 | orchestrator | Friday 19 September 2025 06:42:19 +0000 (0:00:00.280) 0:00:09.339 ******
2025-09-19 06:42:21.458221 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:42:21.458231 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:42:21.458240 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:42:21.458249 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:42:21.458259 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:42:21.458268 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:42:21.458278 | orchestrator |
2025-09-19 06:42:21.458287 | orchestrator | TASK [osism.commons.proxy : Set system
wide settings in environment file] ****** 2025-09-19 06:42:21.458297 | orchestrator | Friday 19 September 2025 06:42:20 +0000 (0:00:01.032) 0:00:10.372 ****** 2025-09-19 06:42:21.458306 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:42:21.458316 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:42:21.458325 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:42:21.458335 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:42:21.458344 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:42:21.458353 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:42:21.458363 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:42:21.458372 | orchestrator | 2025-09-19 06:42:21.458382 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-19 06:42:21.458391 | orchestrator | Friday 19 September 2025 06:42:20 +0000 (0:00:00.595) 0:00:10.967 ****** 2025-09-19 06:42:21.458401 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:42:21.458411 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:42:21.458420 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:42:21.458429 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:42:21.458439 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:42:21.458448 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:42:21.458457 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:21.458467 | orchestrator | 2025-09-19 06:42:21.458477 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-19 06:42:21.458487 | orchestrator | Friday 19 September 2025 06:42:21 +0000 (0:00:00.455) 0:00:11.423 ****** 2025-09-19 06:42:21.458497 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:42:21.458506 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:42:21.458524 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:42:33.863147 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 06:42:33.863261 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:42:33.863275 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:42:33.863287 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:42:33.863298 | orchestrator | 2025-09-19 06:42:33.863311 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-19 06:42:33.863323 | orchestrator | Friday 19 September 2025 06:42:21 +0000 (0:00:00.226) 0:00:11.649 ****** 2025-09-19 06:42:33.863336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:42:33.863365 | orchestrator | 2025-09-19 06:42:33.863377 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-19 06:42:33.863389 | orchestrator | Friday 19 September 2025 06:42:21 +0000 (0:00:00.305) 0:00:11.955 ****** 2025-09-19 06:42:33.863428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:42:33.863439 | orchestrator | 2025-09-19 06:42:33.863451 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-19 06:42:33.863462 | orchestrator | Friday 19 September 2025 06:42:22 +0000 (0:00:00.326) 0:00:12.282 ****** 2025-09-19 06:42:33.863473 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:33.863484 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.863495 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.863506 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:33.863517 | orchestrator | ok: [testbed-manager] 2025-09-19 
06:42:33.863527 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.863538 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:33.863549 | orchestrator | 2025-09-19 06:42:33.863560 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-19 06:42:33.863571 | orchestrator | Friday 19 September 2025 06:42:23 +0000 (0:00:01.445) 0:00:13.727 ****** 2025-09-19 06:42:33.863629 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:42:33.863640 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:42:33.863651 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:42:33.863662 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:42:33.863673 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:42:33.863684 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:42:33.863696 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:42:33.863709 | orchestrator | 2025-09-19 06:42:33.863722 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-19 06:42:33.863734 | orchestrator | Friday 19 September 2025 06:42:23 +0000 (0:00:00.275) 0:00:14.002 ****** 2025-09-19 06:42:33.863747 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.863759 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.863772 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.863784 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.863797 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:33.863808 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:33.863821 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:33.863833 | orchestrator | 2025-09-19 06:42:33.863845 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-19 06:42:33.863858 | orchestrator | Friday 19 September 2025 06:42:24 +0000 (0:00:00.570) 0:00:14.573 ****** 2025-09-19 06:42:33.863870 | orchestrator | skipping: 
[testbed-manager] 2025-09-19 06:42:33.863882 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:42:33.863895 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:42:33.863907 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:42:33.863920 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:42:33.863932 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:42:33.863944 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:42:33.863957 | orchestrator | 2025-09-19 06:42:33.863969 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-19 06:42:33.863983 | orchestrator | Friday 19 September 2025 06:42:24 +0000 (0:00:00.293) 0:00:14.867 ****** 2025-09-19 06:42:33.863994 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.864005 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:42:33.864016 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:42:33.864027 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:42:33.864038 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:42:33.864049 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:42:33.864060 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:42:33.864070 | orchestrator | 2025-09-19 06:42:33.864082 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-19 06:42:33.864092 | orchestrator | Friday 19 September 2025 06:42:25 +0000 (0:00:00.607) 0:00:15.475 ****** 2025-09-19 06:42:33.864116 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.864127 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:42:33.864137 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:42:33.864148 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:42:33.864159 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:42:33.864170 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:42:33.864181 | orchestrator | changed: 
[testbed-node-1] 2025-09-19 06:42:33.864192 | orchestrator | 2025-09-19 06:42:33.864203 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-19 06:42:33.864213 | orchestrator | Friday 19 September 2025 06:42:26 +0000 (0:00:01.129) 0:00:16.604 ****** 2025-09-19 06:42:33.864224 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.864235 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.864246 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:33.864257 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.864268 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:33.864279 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.864290 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:33.864301 | orchestrator | 2025-09-19 06:42:33.864312 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-19 06:42:33.864323 | orchestrator | Friday 19 September 2025 06:42:27 +0000 (0:00:01.170) 0:00:17.774 ****** 2025-09-19 06:42:33.864351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:42:33.864363 | orchestrator | 2025-09-19 06:42:33.864375 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-19 06:42:33.864386 | orchestrator | Friday 19 September 2025 06:42:28 +0000 (0:00:00.433) 0:00:18.208 ****** 2025-09-19 06:42:33.864396 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:42:33.864407 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:42:33.864418 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:42:33.864429 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:42:33.864440 | orchestrator | changed: [testbed-node-3] 2025-09-19 
06:42:33.864451 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:42:33.864462 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:42:33.864473 | orchestrator | 2025-09-19 06:42:33.864484 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 06:42:33.864495 | orchestrator | Friday 19 September 2025 06:42:29 +0000 (0:00:01.253) 0:00:19.462 ****** 2025-09-19 06:42:33.864506 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.864516 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.864527 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.864538 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.864549 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:33.864560 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:33.864629 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:33.864643 | orchestrator | 2025-09-19 06:42:33.864654 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-19 06:42:33.864665 | orchestrator | Friday 19 September 2025 06:42:29 +0000 (0:00:00.201) 0:00:19.664 ****** 2025-09-19 06:42:33.864676 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.864687 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.864697 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.864708 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.864719 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:33.864730 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:33.864740 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:33.864751 | orchestrator | 2025-09-19 06:42:33.864762 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 06:42:33.864773 | orchestrator | Friday 19 September 2025 06:42:29 +0000 (0:00:00.211) 0:00:19.876 ****** 2025-09-19 06:42:33.864784 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.864837 | 
orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.864857 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.864868 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.864879 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:33.864890 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:33.864901 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:33.864911 | orchestrator | 2025-09-19 06:42:33.864922 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 06:42:33.864934 | orchestrator | Friday 19 September 2025 06:42:29 +0000 (0:00:00.246) 0:00:20.122 ****** 2025-09-19 06:42:33.864945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:42:33.864959 | orchestrator | 2025-09-19 06:42:33.864970 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-19 06:42:33.864981 | orchestrator | Friday 19 September 2025 06:42:30 +0000 (0:00:00.297) 0:00:20.419 ****** 2025-09-19 06:42:33.864992 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.865002 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.865013 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.865024 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.865035 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:33.865045 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:33.865056 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:33.865067 | orchestrator | 2025-09-19 06:42:33.865083 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 06:42:33.865094 | orchestrator | Friday 19 September 2025 06:42:30 +0000 (0:00:00.514) 0:00:20.933 ****** 2025-09-19 06:42:33.865105 | orchestrator | 
skipping: [testbed-manager] 2025-09-19 06:42:33.865116 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:42:33.865126 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:42:33.865137 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:42:33.865148 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:42:33.865158 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:42:33.865169 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:42:33.865180 | orchestrator | 2025-09-19 06:42:33.865191 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 06:42:33.865202 | orchestrator | Friday 19 September 2025 06:42:31 +0000 (0:00:00.247) 0:00:21.181 ****** 2025-09-19 06:42:33.865213 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.865223 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.865234 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.865245 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.865256 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:42:33.865266 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:42:33.865277 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:42:33.865288 | orchestrator | 2025-09-19 06:42:33.865299 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 06:42:33.865310 | orchestrator | Friday 19 September 2025 06:42:32 +0000 (0:00:01.102) 0:00:22.283 ****** 2025-09-19 06:42:33.865321 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.865331 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.865342 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.865353 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.865364 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:42:33.865375 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:42:33.865386 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:42:33.865396 | orchestrator | 
2025-09-19 06:42:33.865407 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 06:42:33.865418 | orchestrator | Friday 19 September 2025 06:42:32 +0000 (0:00:00.560) 0:00:22.843 ****** 2025-09-19 06:42:33.865430 | orchestrator | ok: [testbed-manager] 2025-09-19 06:42:33.865440 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:42:33.865451 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:42:33.865462 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:42:33.865487 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:43:13.893921 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:43:13.894169 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:43:13.894191 | orchestrator | 2025-09-19 06:43:13.894204 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-19 06:43:13.894216 | orchestrator | Friday 19 September 2025 06:42:33 +0000 (0:00:01.146) 0:00:23.989 ****** 2025-09-19 06:43:13.894228 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:43:13.894240 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:43:13.894251 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:43:13.894262 | orchestrator | changed: [testbed-manager] 2025-09-19 06:43:13.894273 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:43:13.894284 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:43:13.894295 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:43:13.894305 | orchestrator | 2025-09-19 06:43:13.894316 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-19 06:43:13.894327 | orchestrator | Friday 19 September 2025 06:42:50 +0000 (0:00:16.567) 0:00:40.557 ****** 2025-09-19 06:43:13.894338 | orchestrator | ok: [testbed-manager] 2025-09-19 06:43:13.894349 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:43:13.894359 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:43:13.894370 | orchestrator 
| ok: [testbed-node-5] 2025-09-19 06:43:13.894381 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:43:13.894391 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:43:13.894402 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:43:13.894413 | orchestrator | 2025-09-19 06:43:13.894424 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-19 06:43:13.894435 | orchestrator | Friday 19 September 2025 06:42:50 +0000 (0:00:00.230) 0:00:40.788 ****** 2025-09-19 06:43:13.894446 | orchestrator | ok: [testbed-manager] 2025-09-19 06:43:13.894456 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:43:13.894467 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:43:13.894477 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:43:13.894488 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:43:13.894499 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:43:13.894510 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:43:13.894520 | orchestrator | 2025-09-19 06:43:13.894531 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-19 06:43:13.894542 | orchestrator | Friday 19 September 2025 06:42:50 +0000 (0:00:00.245) 0:00:41.034 ****** 2025-09-19 06:43:13.894553 | orchestrator | ok: [testbed-manager] 2025-09-19 06:43:13.894596 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:43:13.894608 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:43:13.894619 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:43:13.894630 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:43:13.894651 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:43:13.894663 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:43:13.894674 | orchestrator | 2025-09-19 06:43:13.894685 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-19 06:43:13.894696 | orchestrator | Friday 19 September 2025 06:42:51 +0000 (0:00:00.240) 0:00:41.274 ****** 2025-09-19 
06:43:13.894708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:43:13.894722 | orchestrator | 2025-09-19 06:43:13.894733 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-19 06:43:13.894744 | orchestrator | Friday 19 September 2025 06:42:51 +0000 (0:00:00.303) 0:00:41.578 ****** 2025-09-19 06:43:13.894754 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:43:13.894765 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:43:13.894776 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:43:13.894787 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:43:13.894797 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:43:13.894808 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:43:13.894845 | orchestrator | ok: [testbed-manager] 2025-09-19 06:43:13.894856 | orchestrator | 2025-09-19 06:43:13.894867 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-19 06:43:13.894892 | orchestrator | Friday 19 September 2025 06:42:52 +0000 (0:00:01.496) 0:00:43.074 ****** 2025-09-19 06:43:13.894903 | orchestrator | changed: [testbed-manager] 2025-09-19 06:43:13.894913 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:43:13.894924 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:43:13.894935 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:43:13.894945 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:43:13.894956 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:43:13.894966 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:43:13.894977 | orchestrator | 2025-09-19 06:43:13.894988 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-19 06:43:13.894999 | 
orchestrator | Friday 19 September 2025 06:42:54 +0000 (0:00:01.116) 0:00:44.191 ****** 2025-09-19 06:43:13.895010 | orchestrator | ok: [testbed-manager] 2025-09-19 06:43:13.895021 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:43:13.895032 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:43:13.895043 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:43:13.895054 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:43:13.895064 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:43:13.895075 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:43:13.895085 | orchestrator | 2025-09-19 06:43:13.895096 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-19 06:43:13.895107 | orchestrator | Friday 19 September 2025 06:42:54 +0000 (0:00:00.806) 0:00:44.997 ****** 2025-09-19 06:43:13.895119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:43:13.895132 | orchestrator | 2025-09-19 06:43:13.895143 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-19 06:43:13.895154 | orchestrator | Friday 19 September 2025 06:42:55 +0000 (0:00:00.294) 0:00:45.292 ****** 2025-09-19 06:43:13.895165 | orchestrator | changed: [testbed-manager] 2025-09-19 06:43:13.895176 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:43:13.895186 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:43:13.895197 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:43:13.895208 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:43:13.895218 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:43:13.895229 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:43:13.895240 | orchestrator | 2025-09-19 06:43:13.895270 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-19 06:43:13.895281 | orchestrator | Friday 19 September 2025 06:42:56 +0000 (0:00:01.090) 0:00:46.382 ****** 2025-09-19 06:43:13.895292 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:43:13.895303 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:43:13.895314 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:43:13.895325 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:43:13.895335 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:43:13.895346 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:43:13.895356 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:43:13.895367 | orchestrator | 2025-09-19 06:43:13.895377 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-19 06:43:13.895388 | orchestrator | Friday 19 September 2025 06:42:56 +0000 (0:00:00.288) 0:00:46.670 ****** 2025-09-19 06:43:13.895399 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:43:13.895409 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:43:13.895420 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:43:13.895430 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:43:13.895441 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:43:13.895451 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:43:13.895462 | orchestrator | changed: [testbed-manager] 2025-09-19 06:43:13.895481 | orchestrator | 2025-09-19 06:43:13.895492 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-19 06:43:13.895503 | orchestrator | Friday 19 September 2025 06:43:08 +0000 (0:00:11.848) 0:00:58.519 ****** 2025-09-19 06:43:13.895513 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:43:13.895524 | orchestrator | ok: [testbed-manager] 2025-09-19 06:43:13.895534 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:43:13.895545 | orchestrator | ok: [testbed-node-1] 2025-09-19 
06:43:13.895556 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:43:13.895598 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:43:13.895610 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:43:13.895620 | orchestrator |
2025-09-19 06:43:13.895631 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-19 06:43:13.895642 | orchestrator | Friday 19 September 2025 06:43:09 +0000 (0:00:01.302) 0:00:59.821 ******
2025-09-19 06:43:13.895653 | orchestrator | ok: [testbed-manager]
2025-09-19 06:43:13.895664 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:43:13.895674 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:43:13.895685 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:43:13.895695 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:43:13.895706 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:43:13.895716 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:43:13.895727 | orchestrator |
2025-09-19 06:43:13.895738 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-19 06:43:13.895748 | orchestrator | Friday 19 September 2025 06:43:10 +0000 (0:00:00.224) 0:01:00.788 ******
2025-09-19 06:43:13.895759 | orchestrator | ok: [testbed-manager]
2025-09-19 06:43:13.895770 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:43:13.895780 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:43:13.895791 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:43:13.895801 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:43:13.895812 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:43:13.895823 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:43:13.895833 | orchestrator |
2025-09-19 06:43:13.895844 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-19 06:43:13.895855 | orchestrator | Friday 19 September 2025 06:43:10 +0000 (0:00:00.224) 0:01:01.012 ******
2025-09-19 06:43:13.895865 | orchestrator | ok: [testbed-manager]
2025-09-19 06:43:13.895876 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:43:13.895886 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:43:13.895897 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:43:13.895907 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:43:13.895918 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:43:13.895928 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:43:13.895939 | orchestrator |
2025-09-19 06:43:13.895950 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-19 06:43:13.895960 | orchestrator | Friday 19 September 2025 06:43:11 +0000 (0:00:00.223) 0:01:01.236 ******
2025-09-19 06:43:13.895972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:43:13.895983 | orchestrator |
2025-09-19 06:43:13.895994 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-19 06:43:13.896005 | orchestrator | Friday 19 September 2025 06:43:11 +0000 (0:00:00.279) 0:01:01.515 ******
2025-09-19 06:43:13.896015 | orchestrator | ok: [testbed-manager]
2025-09-19 06:43:13.896026 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:43:13.896037 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:43:13.896047 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:43:13.896058 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:43:13.896068 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:43:13.896079 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:43:13.896090 | orchestrator |
2025-09-19 06:43:13.896101 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-19 06:43:13.896118 | orchestrator | Friday 19 September 2025 06:43:13 +0000 (0:00:01.685) 0:01:03.200 ******
2025-09-19 06:43:13.896129 | orchestrator | changed: [testbed-manager]
2025-09-19 06:43:13.896140 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:43:13.896151 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:43:13.896161 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:43:13.896172 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:43:13.896183 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:43:13.896193 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:43:13.896204 | orchestrator |
2025-09-19 06:43:13.896215 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-19 06:43:13.896226 | orchestrator | Friday 19 September 2025 06:43:13 +0000 (0:00:00.587) 0:01:03.788 ******
2025-09-19 06:43:13.896236 | orchestrator | ok: [testbed-manager]
2025-09-19 06:43:13.896247 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:43:13.896258 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:43:13.896268 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:43:13.896279 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:43:13.896289 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:43:13.896300 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:43:13.896310 | orchestrator |
2025-09-19 06:43:13.896328 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-19 06:45:31.145349 | orchestrator | Friday 19 September 2025 06:43:13 +0000 (0:00:00.233) 0:01:04.022 ******
2025-09-19 06:45:31.145463 | orchestrator | ok: [testbed-manager]
2025-09-19 06:45:31.145479 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:45:31.145491 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:45:31.145502 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:45:31.145513 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:45:31.145524 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:45:31.145576 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:45:31.145587 | orchestrator |
2025-09-19 06:45:31.145600 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-19 06:45:31.145611 | orchestrator | Friday 19 September 2025 06:43:15 +0000 (0:00:01.280) 0:01:05.302 ******
2025-09-19 06:45:31.145622 | orchestrator | changed: [testbed-manager]
2025-09-19 06:45:31.145634 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:45:31.145645 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:45:31.145656 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:45:31.145667 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:45:31.145677 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:45:31.145688 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:45:31.145699 | orchestrator |
2025-09-19 06:45:31.145710 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-19 06:45:31.145721 | orchestrator | Friday 19 September 2025 06:43:17 +0000 (0:00:02.040) 0:01:07.342 ******
2025-09-19 06:45:31.145732 | orchestrator | ok: [testbed-manager]
2025-09-19 06:45:31.145743 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:45:31.145754 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:45:31.145765 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:45:31.145776 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:45:31.145805 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:45:31.145816 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:45:31.145827 | orchestrator |
2025-09-19 06:45:31.145839 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-19 06:45:31.145849 | orchestrator | Friday 19 September 2025 06:43:19 +0000 (0:00:02.532) 0:01:09.875 ******
2025-09-19 06:45:31.145860 | orchestrator | ok: [testbed-manager]
2025-09-19 06:45:31.145871 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:45:31.145882 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:45:31.145895 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:45:31.145907 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:45:31.145919 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:45:31.145931 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:45:31.145943 | orchestrator |
2025-09-19 06:45:31.145956 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-19 06:45:31.145992 | orchestrator | Friday 19 September 2025 06:43:57 +0000 (0:00:37.725) 0:01:47.601 ******
2025-09-19 06:45:31.146005 | orchestrator | changed: [testbed-manager]
2025-09-19 06:45:31.146074 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:45:31.146087 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:45:31.146100 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:45:31.146113 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:45:31.146125 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:45:31.146136 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:45:31.146146 | orchestrator |
2025-09-19 06:45:31.146157 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-19 06:45:31.146168 | orchestrator | Friday 19 September 2025 06:45:16 +0000 (0:01:18.662) 0:03:06.264 ******
2025-09-19 06:45:31.146180 | orchestrator | ok: [testbed-manager]
2025-09-19 06:45:31.146191 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:45:31.146201 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:45:31.146212 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:45:31.146223 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:45:31.146234 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:45:31.146244 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:45:31.146255 | orchestrator |
2025-09-19 06:45:31.146266 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-19 06:45:31.146278 | orchestrator | Friday 19 September 2025 06:45:17 +0000 (0:00:01.764) 0:03:08.028 ******
2025-09-19 06:45:31.146289 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:45:31.146299 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:45:31.146310 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:45:31.146326 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:45:31.146337 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:45:31.146348 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:45:31.146359 | orchestrator | changed: [testbed-manager]
2025-09-19 06:45:31.146369 | orchestrator |
2025-09-19 06:45:31.146380 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-19 06:45:31.146391 | orchestrator | Friday 19 September 2025 06:45:29 +0000 (0:00:12.023) 0:03:20.052 ******
2025-09-19 06:45:31.146410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-19 06:45:31.146426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-09-19 06:45:31.146462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-19 06:45:31.146483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-19 06:45:31.146503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-19 06:45:31.146515 | orchestrator |
2025-09-19 06:45:31.146526 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-19 06:45:31.146554 | orchestrator | Friday 19 September 2025 06:45:30 +0000 (0:00:00.413) 0:03:20.466 ******
2025-09-19 06:45:31.146565 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 06:45:31.146576 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:45:31.146587 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 06:45:31.146598 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 06:45:31.146609 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:45:31.146619 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 06:45:31.146630 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:45:31.146641 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:45:31.146652 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 06:45:31.146662 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 06:45:31.146673 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 06:45:31.146683 | orchestrator |
2025-09-19 06:45:31.146694 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-19 06:45:31.146705 | orchestrator | Friday 19 September 2025 06:45:31 +0000 (0:00:00.686) 0:03:21.153 ******
2025-09-19 06:45:31.146716 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 06:45:31.146728 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 06:45:31.146739 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 06:45:31.146749 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 06:45:31.146760 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 06:45:31.146775 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 06:45:31.146787 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 06:45:31.146797 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 06:45:31.146808 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 06:45:31.146818 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 06:45:31.146829 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 06:45:31.146840 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 06:45:31.146850 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 06:45:31.146861 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 06:45:31.146871 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 06:45:31.146882 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:45:31.146893 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 06:45:31.146910 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 06:45:31.146921 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 06:45:31.146931 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 06:45:31.146942 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 06:45:31.146959 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 06:45:36.580632 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 06:45:36.580722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 06:45:36.580736 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 06:45:36.580749 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:45:36.580761 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 06:45:36.580772 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 06:45:36.580783 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 06:45:36.580794 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 06:45:36.580805 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 06:45:36.580816 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 06:45:36.580827 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 06:45:36.580838 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 06:45:36.580849 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 06:45:36.580860 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 06:45:36.580870 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 06:45:36.580881 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:45:36.580892 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 06:45:36.580903 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 06:45:36.580914 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 06:45:36.580925 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 06:45:36.580937 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 06:45:36.580948 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:45:36.580959 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 06:45:36.580970 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 06:45:36.580981 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 06:45:36.580992 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 06:45:36.581003 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 06:45:36.581014 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 06:45:36.581025 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 06:45:36.581058 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 06:45:36.581069 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 06:45:36.581080 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 06:45:36.581090 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 06:45:36.581101 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 06:45:36.581112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 06:45:36.581122 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 06:45:36.581133 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 06:45:36.581144 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 06:45:36.581154 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 06:45:36.581165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 06:45:36.581176 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 06:45:36.581186 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 06:45:36.581197 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 06:45:36.581223 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 06:45:36.581235 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 06:45:36.581246 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 06:45:36.581257 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 06:45:36.581267 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 06:45:36.581278 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 06:45:36.581289 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 06:45:36.581300 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 06:45:36.581310 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 06:45:36.581321 | orchestrator |
2025-09-19 06:45:36.581333 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-19 06:45:36.581343 | orchestrator | Friday 19 September 2025 06:45:34 +0000 (0:00:03.796) 0:03:24.949 ******
2025-09-19 06:45:36.581354 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 06:45:36.581365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 06:45:36.581376 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 06:45:36.581386 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 06:45:36.581397 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 06:45:36.581408 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 06:45:36.581419 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 06:45:36.581429 | orchestrator |
2025-09-19 06:45:36.581440 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-19 06:45:36.581458 | orchestrator | Friday 19 September 2025 06:45:35 +0000 (0:00:00.535) 0:03:25.485 ******
2025-09-19 06:45:36.581469 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 06:45:36.581480 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:45:36.581505 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 06:45:36.581516 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:45:36.581551 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 06:45:36.581562 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:45:36.581573 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 06:45:36.581584 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:45:36.581595 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 06:45:36.581610 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 06:45:36.581621 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 06:45:36.581632 | orchestrator |
2025-09-19 06:45:36.581643 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-19 06:45:36.581653 | orchestrator | Friday 19 September 2025 06:45:35 +0000 (0:00:00.473) 0:03:25.958 ******
2025-09-19 06:45:36.581664 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 06:45:36.581675 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:45:36.581685 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 06:45:36.581696 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 06:45:36.581707 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:45:36.581717 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:45:36.581728 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 06:45:36.581739 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:45:36.581749 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 06:45:36.581760 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 06:45:36.581771 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 06:45:36.581781 | orchestrator |
2025-09-19 06:45:36.581792 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-19 06:45:36.581803 | orchestrator | Friday 19 September 2025 06:45:36 +0000 (0:00:00.528) 0:03:26.486 ******
2025-09-19 06:45:36.581814 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:45:36.581824 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:45:36.581837 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:45:36.581856 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:45:36.581875 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:45:36.581904 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:45:48.194234 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:45:48.194346 | orchestrator |
2025-09-19 06:45:48.194363 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-19 06:45:48.194376 | orchestrator | Friday 19 September 2025 06:45:36 +0000 (0:00:00.230) 0:03:26.717 ******
2025-09-19 06:45:48.194388 | orchestrator | ok: [testbed-manager]
2025-09-19 06:45:48.194401 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:45:48.194412 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:45:48.194423 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:45:48.194461 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:45:48.194473 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:45:48.194483 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:45:48.194494 | orchestrator |
2025-09-19 06:45:48.194505 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-19 06:45:48.194516 | orchestrator | Friday 19 September 2025 06:45:42 +0000 (0:00:05.723) 0:03:32.441 ******
2025-09-19 06:45:48.194573 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-19 06:45:48.194593 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-19 06:45:48.194611 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:45:48.194629 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-19 06:45:48.194648 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:45:48.194666 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-19 06:45:48.194684 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:45:48.194700 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-19 06:45:48.194711 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:45:48.194721 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-19 06:45:48.194732 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:45:48.194746 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:45:48.194757 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-19 06:45:48.194770 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:45:48.194782 | orchestrator |
2025-09-19 06:45:48.194794 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-19 06:45:48.194807 | orchestrator | Friday 19 September 2025 06:45:42 +0000 (0:00:00.272) 0:03:32.713 ******
2025-09-19 06:45:48.194820 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-19 06:45:48.194832 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-19 06:45:48.194843 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-19 06:45:48.194853 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-19 06:45:48.194864 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-19 06:45:48.194875 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-19 06:45:48.194885 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-19 06:45:48.194896 | orchestrator |
2025-09-19 06:45:48.194906 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-19 06:45:48.194917 | orchestrator | Friday 19 September 2025 06:45:43 +0000 (0:00:01.063) 0:03:33.777 ******
2025-09-19 06:45:48.194930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:45:48.194943 | orchestrator |
2025-09-19 06:45:48.194954 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-19 06:45:48.194964 | orchestrator | Friday 19 September 2025 06:45:44 +0000 (0:00:00.525) 0:03:34.302 ******
2025-09-19 06:45:48.194975 | orchestrator | ok: [testbed-manager]
2025-09-19 06:45:48.194986 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:45:48.194997 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:45:48.195007 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:45:48.195018 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:45:48.195028 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:45:48.195039 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:45:48.195050 | orchestrator |
2025-09-19 06:45:48.195076 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-19 06:45:48.195087 | orchestrator | Friday 19 September 2025 06:45:45 +0000 (0:00:01.245) 0:03:35.548 ******
2025-09-19 06:45:48.195098 | orchestrator | ok: [testbed-manager]
2025-09-19 06:45:48.195108 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:45:48.195119 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:45:48.195129 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:45:48.195140 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:45:48.195150 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:45:48.195171 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:45:48.195182 | orchestrator |
2025-09-19 06:45:48.195193 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-09-19 06:45:48.195204 | orchestrator | Friday 19 September 2025 06:45:46 +0000 (0:00:00.614) 0:03:36.162 ******
2025-09-19 06:45:48.195214 | orchestrator | changed: [testbed-manager]
2025-09-19 06:45:48.195225 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:45:48.195236 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:45:48.195246 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:45:48.195257 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:45:48.195267 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:45:48.195278 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:45:48.195288 | orchestrator |
2025-09-19 06:45:48.195299 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-09-19 06:45:48.195310 | orchestrator | Friday 19 September 2025 06:45:46 +0000 (0:00:00.633) 0:03:36.796 ******
2025-09-19 06:45:48.195321 | orchestrator | ok: [testbed-manager]
2025-09-19 06:45:48.195331 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:45:48.195342 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:45:48.195353 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:45:48.195363 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:45:48.195374 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:45:48.195385 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:45:48.195395 | orchestrator |
2025-09-19 06:45:48.195406 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-19 06:45:48.195417 | orchestrator | Friday 19 September 2025 06:45:47 +0000 (0:00:00.591) 0:03:37.387 ******
2025-09-19 06:45:48.195450 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262956.4419715, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 06:45:48.195466 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262959.0388467, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 06:45:48.195478 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262988.643618, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 06:45:48.195489 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262975.3862371, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 06:45:48.195506 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262989.0902436, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 06:45:48.195543 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262969.7258322, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 06:45:48.195555 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758262982.1660483, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 06:45:48.195585 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 06:46:05.126562 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 06:46:05.126682 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root',
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:46:05.126700 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:46:05.126738 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:46:05.126751 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:46:05.126762 | 
orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 06:46:05.126775 | orchestrator | 2025-09-19 06:46:05.126788 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-19 06:46:05.126801 | orchestrator | Friday 19 September 2025 06:45:48 +0000 (0:00:00.932) 0:03:38.320 ****** 2025-09-19 06:46:05.126813 | orchestrator | changed: [testbed-manager] 2025-09-19 06:46:05.126825 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:46:05.126836 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:46:05.126847 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:46:05.126857 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:46:05.126868 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:46:05.126879 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:46:05.126890 | orchestrator | 2025-09-19 06:46:05.126901 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-19 06:46:05.126911 | orchestrator | Friday 19 September 2025 06:45:49 +0000 (0:00:01.145) 0:03:39.465 ****** 2025-09-19 06:46:05.126923 | orchestrator | changed: [testbed-manager] 2025-09-19 06:46:05.126933 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:46:05.126944 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:46:05.126955 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:46:05.126982 | orchestrator | changed: [testbed-node-0] 2025-09-19 
06:46:05.126994 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:46:05.127004 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:46:05.127015 | orchestrator | 2025-09-19 06:46:05.127026 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-19 06:46:05.127037 | orchestrator | Friday 19 September 2025 06:45:50 +0000 (0:00:01.192) 0:03:40.657 ****** 2025-09-19 06:46:05.127048 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:46:05.127079 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:46:05.127092 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:46:05.127105 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:46:05.127118 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:46:05.127130 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:46:05.127141 | orchestrator | changed: [testbed-manager] 2025-09-19 06:46:05.127152 | orchestrator | 2025-09-19 06:46:05.127163 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-19 06:46:05.127174 | orchestrator | Friday 19 September 2025 06:45:52 +0000 (0:00:01.879) 0:03:42.537 ****** 2025-09-19 06:46:05.127193 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:46:05.127204 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:46:05.127215 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:46:05.127225 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:46:05.127236 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:46:05.127247 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:46:05.127258 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:46:05.127268 | orchestrator | 2025-09-19 06:46:05.127279 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-19 06:46:05.127291 | orchestrator | Friday 19 September 2025 06:45:52 +0000 (0:00:00.291) 0:03:42.829 ****** 2025-09-19 
06:46:05.127302 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:05.127314 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:05.127324 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:05.127335 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:05.127346 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:05.127357 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:46:05.127368 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:46:05.127378 | orchestrator | 2025-09-19 06:46:05.127389 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-19 06:46:05.127400 | orchestrator | Friday 19 September 2025 06:45:53 +0000 (0:00:00.762) 0:03:43.591 ****** 2025-09-19 06:46:05.127412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:46:05.127425 | orchestrator | 2025-09-19 06:46:05.127436 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-19 06:46:05.127447 | orchestrator | Friday 19 September 2025 06:45:53 +0000 (0:00:00.396) 0:03:43.987 ****** 2025-09-19 06:46:05.127458 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:05.127469 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:46:05.127480 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:46:05.127490 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:46:05.127501 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:46:05.127512 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:46:05.127542 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:46:05.127554 | orchestrator | 2025-09-19 06:46:05.127565 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-19 06:46:05.127576 | orchestrator | 
Friday 19 September 2025 06:46:01 +0000 (0:00:07.911) 0:03:51.899 ****** 2025-09-19 06:46:05.127592 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:05.127603 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:05.127614 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:05.127625 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:05.127636 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:46:05.127646 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:46:05.127657 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:05.127668 | orchestrator | 2025-09-19 06:46:05.127680 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-19 06:46:05.127690 | orchestrator | Friday 19 September 2025 06:46:03 +0000 (0:00:01.371) 0:03:53.270 ****** 2025-09-19 06:46:05.127701 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:05.127712 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:05.127723 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:05.127734 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:05.127744 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:05.127755 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:46:05.127766 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:46:05.127776 | orchestrator | 2025-09-19 06:46:05.127787 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-19 06:46:05.127798 | orchestrator | Friday 19 September 2025 06:46:04 +0000 (0:00:01.030) 0:03:54.301 ****** 2025-09-19 06:46:05.127809 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:05.127826 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:05.127837 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:05.127848 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:05.127859 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:05.127869 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:46:05.127880 | orchestrator | ok: 
[testbed-node-2] 2025-09-19 06:46:05.127891 | orchestrator | 2025-09-19 06:46:05.127902 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-19 06:46:05.127913 | orchestrator | Friday 19 September 2025 06:46:04 +0000 (0:00:00.288) 0:03:54.589 ****** 2025-09-19 06:46:05.127924 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:05.127935 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:05.127945 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:05.127956 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:05.127967 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:05.127977 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:46:05.127988 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:46:05.127999 | orchestrator | 2025-09-19 06:46:05.128010 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-19 06:46:05.128020 | orchestrator | Friday 19 September 2025 06:46:04 +0000 (0:00:00.398) 0:03:54.987 ****** 2025-09-19 06:46:05.128031 | orchestrator | ok: [testbed-manager] 2025-09-19 06:46:05.128042 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:46:05.128053 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:46:05.128063 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:46:05.128074 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:46:05.128101 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:47:16.041465 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:47:16.041612 | orchestrator | 2025-09-19 06:47:16.041640 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-19 06:47:16.041662 | orchestrator | Friday 19 September 2025 06:46:05 +0000 (0:00:00.272) 0:03:55.259 ****** 2025-09-19 06:47:16.041671 | orchestrator | ok: [testbed-manager] 2025-09-19 06:47:16.041681 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:47:16.041690 | orchestrator | ok: 
[testbed-node-3] 2025-09-19 06:47:16.041699 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:47:16.041708 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:47:16.041717 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:47:16.041726 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:47:16.041734 | orchestrator | 2025-09-19 06:47:16.041743 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-19 06:47:16.041753 | orchestrator | Friday 19 September 2025 06:46:10 +0000 (0:00:05.824) 0:04:01.083 ****** 2025-09-19 06:47:16.041764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:47:16.041775 | orchestrator | 2025-09-19 06:47:16.041785 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-19 06:47:16.041793 | orchestrator | Friday 19 September 2025 06:46:11 +0000 (0:00:00.386) 0:04:01.470 ****** 2025-09-19 06:47:16.041803 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-19 06:47:16.041812 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-19 06:47:16.041821 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-19 06:47:16.041830 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:47:16.041849 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-19 06:47:16.041866 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-19 06:47:16.041875 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-19 06:47:16.041884 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:47:16.041893 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-19 06:47:16.041901 | orchestrator | 
skipping: [testbed-node-5] => (item=apt-daily)  2025-09-19 06:47:16.041910 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:47:16.041941 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-19 06:47:16.041951 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-19 06:47:16.041959 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:47:16.041968 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-19 06:47:16.041977 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:47:16.041986 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-19 06:47:16.041994 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:47:16.042005 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-19 06:47:16.042061 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-19 06:47:16.042072 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:47:16.042082 | orchestrator | 2025-09-19 06:47:16.042093 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-19 06:47:16.042104 | orchestrator | Friday 19 September 2025 06:46:11 +0000 (0:00:00.325) 0:04:01.796 ****** 2025-09-19 06:47:16.042128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:47:16.042140 | orchestrator | 2025-09-19 06:47:16.042151 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-19 06:47:16.042162 | orchestrator | Friday 19 September 2025 06:46:12 +0000 (0:00:00.391) 0:04:02.187 ****** 2025-09-19 06:47:16.042183 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-19 06:47:16.042202 | orchestrator | skipping: 
[testbed-manager] 2025-09-19 06:47:16.042212 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-19 06:47:16.042223 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-19 06:47:16.042233 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:47:16.042243 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-19 06:47:16.042253 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:47:16.042263 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-19 06:47:16.042273 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:47:16.042284 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-19 06:47:16.042294 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:47:16.042304 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:47:16.042314 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-19 06:47:16.042325 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:47:16.042335 | orchestrator | 2025-09-19 06:47:16.042345 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-19 06:47:16.042355 | orchestrator | Friday 19 September 2025 06:46:12 +0000 (0:00:00.317) 0:04:02.505 ****** 2025-09-19 06:47:16.042366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:47:16.042376 | orchestrator | 2025-09-19 06:47:16.042387 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-19 06:47:16.042397 | orchestrator | Friday 19 September 2025 06:46:12 +0000 (0:00:00.416) 0:04:02.921 ****** 2025-09-19 06:47:16.042407 | orchestrator | changed: [testbed-manager] 2025-09-19 
06:47:16.042435 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:47:16.042446 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:47:16.042457 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:47:16.042467 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:47:16.042477 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:47:16.042488 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:47:16.042497 | orchestrator | 2025-09-19 06:47:16.042535 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-19 06:47:16.042546 | orchestrator | Friday 19 September 2025 06:46:47 +0000 (0:00:35.071) 0:04:37.993 ****** 2025-09-19 06:47:16.042557 | orchestrator | changed: [testbed-manager] 2025-09-19 06:47:16.042567 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:47:16.042576 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:47:16.042587 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:47:16.042597 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:47:16.042607 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:47:16.042617 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:47:16.042627 | orchestrator | 2025-09-19 06:47:16.042637 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-19 06:47:16.042647 | orchestrator | Friday 19 September 2025 06:46:56 +0000 (0:00:08.249) 0:04:46.242 ****** 2025-09-19 06:47:16.042657 | orchestrator | changed: [testbed-manager] 2025-09-19 06:47:16.042667 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:47:16.042678 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:47:16.042688 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:47:16.042698 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:47:16.042708 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:47:16.042717 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:47:16.042728 | 
orchestrator | 2025-09-19 06:47:16.042738 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-19 06:47:16.042748 | orchestrator | Friday 19 September 2025 06:47:03 +0000 (0:00:07.646) 0:04:53.889 ****** 2025-09-19 06:47:16.042758 | orchestrator | ok: [testbed-manager] 2025-09-19 06:47:16.042768 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:47:16.042778 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:47:16.042788 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:47:16.042798 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:47:16.042809 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:47:16.042818 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:47:16.042828 | orchestrator | 2025-09-19 06:47:16.042839 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-19 06:47:16.042850 | orchestrator | Friday 19 September 2025 06:47:05 +0000 (0:00:01.686) 0:04:55.575 ****** 2025-09-19 06:47:16.042860 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:47:16.042869 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:47:16.042879 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:47:16.042889 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:47:16.042899 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:47:16.042910 | orchestrator | changed: [testbed-manager] 2025-09-19 06:47:16.042920 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:47:16.042929 | orchestrator | 2025-09-19 06:47:16.042940 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-19 06:47:16.042950 | orchestrator | Friday 19 September 2025 06:47:11 +0000 (0:00:06.075) 0:05:01.651 ****** 2025-09-19 06:47:16.042961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:47:16.042974 | orchestrator | 2025-09-19 06:47:16.042988 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-19 06:47:16.042999 | orchestrator | Friday 19 September 2025 06:47:12 +0000 (0:00:00.891) 0:05:02.543 ****** 2025-09-19 06:47:16.043009 | orchestrator | changed: [testbed-manager] 2025-09-19 06:47:16.043019 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:47:16.043029 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:47:16.043039 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:47:16.043049 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:47:16.043059 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:47:16.043068 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:47:16.043079 | orchestrator | 2025-09-19 06:47:16.043089 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-19 06:47:16.043106 | orchestrator | Friday 19 September 2025 06:47:13 +0000 (0:00:00.748) 0:05:03.291 ****** 2025-09-19 06:47:16.043116 | orchestrator | ok: [testbed-manager] 2025-09-19 06:47:16.043126 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:47:16.043137 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:47:16.043147 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:47:16.043157 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:47:16.043167 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:47:16.043177 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:47:16.043187 | orchestrator | 2025-09-19 06:47:16.043198 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-19 06:47:16.043208 | orchestrator | Friday 19 September 2025 06:47:14 +0000 (0:00:01.772) 0:05:05.064 ****** 2025-09-19 06:47:16.043218 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:47:16.043228 | orchestrator | changed: [testbed-node-3] 
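The play above pins the hosts to UTC, which also matches how the raw timestamps elsewhere in this log are best read: the find-module loop items from the "Remove pam_motd.so rule" task report `atime`/`mtime`/`ctime` as bare Unix epochs. A quick conversion sketch (the helper name is mine):

```python
from datetime import datetime, timezone

def epoch_to_utc(epoch: float) -> str:
    """Render a Unix epoch as an ISO-8601 UTC timestamp."""
    return datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat()

# mtime of /etc/pam.d/sshd as reported in the loop items above
print(epoch_to_utc(1740432309.0))  # → 2025-02-24T21:25:09+00:00
```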
2025-09-19 06:47:16.043238 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:47:16.043248 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:47:16.043258 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:47:16.043268 | orchestrator | changed: [testbed-manager] 2025-09-19 06:47:16.043278 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:47:16.043288 | orchestrator | 2025-09-19 06:47:16.043298 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-19 06:47:16.043309 | orchestrator | Friday 19 September 2025 06:47:15 +0000 (0:00:00.817) 0:05:05.882 ****** 2025-09-19 06:47:16.043319 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:47:16.043329 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:47:16.043339 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:47:16.043348 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:47:16.043358 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:47:16.043368 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:47:16.043379 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:47:16.043389 | orchestrator | 2025-09-19 06:47:16.043398 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-19 06:47:16.043415 | orchestrator | Friday 19 September 2025 06:47:16 +0000 (0:00:00.286) 0:05:06.169 ****** 2025-09-19 06:47:43.445053 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:47:43.445159 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:47:43.445172 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:47:43.445182 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:47:43.445192 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:47:43.445202 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:47:43.445211 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:47:43.445221 | orchestrator | 2025-09-19 06:47:43.445232 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-19 06:47:43.445243 | orchestrator | Friday 19 September 2025 06:47:16 +0000 (0:00:00.426) 0:05:06.595 ****** 2025-09-19 06:47:43.445253 | orchestrator | ok: [testbed-manager] 2025-09-19 06:47:43.445264 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:47:43.445273 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:47:43.445283 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:47:43.445292 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:47:43.445302 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:47:43.445311 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:47:43.445321 | orchestrator | 2025-09-19 06:47:43.445331 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-19 06:47:43.445340 | orchestrator | Friday 19 September 2025 06:47:16 +0000 (0:00:00.296) 0:05:06.892 ****** 2025-09-19 06:47:43.445350 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:47:43.445360 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:47:43.445370 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:47:43.445379 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:47:43.445389 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:47:43.445398 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:47:43.445407 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:47:43.445441 | orchestrator | 2025-09-19 06:47:43.445451 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-19 06:47:43.445461 | orchestrator | Friday 19 September 2025 06:47:17 +0000 (0:00:00.292) 0:05:07.184 ****** 2025-09-19 06:47:43.445470 | orchestrator | ok: [testbed-manager] 2025-09-19 06:47:43.445480 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:47:43.445489 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:47:43.445499 | orchestrator | ok: 
[testbed-node-5] 2025-09-19 06:47:43.445508 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:47:43.445550 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:47:43.445561 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:47:43.445571 | orchestrator | 2025-09-19 06:47:43.445580 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-19 06:47:43.445590 | orchestrator | Friday 19 September 2025 06:47:17 +0000 (0:00:00.299) 0:05:07.483 ****** 2025-09-19 06:47:43.445601 | orchestrator | ok: [testbed-manager] =>  2025-09-19 06:47:43.445612 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 06:47:43.445623 | orchestrator | ok: [testbed-node-3] =>  2025-09-19 06:47:43.445634 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 06:47:43.445645 | orchestrator | ok: [testbed-node-4] =>  2025-09-19 06:47:43.445655 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 06:47:43.445666 | orchestrator | ok: [testbed-node-5] =>  2025-09-19 06:47:43.445677 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 06:47:43.445688 | orchestrator | ok: [testbed-node-0] =>  2025-09-19 06:47:43.445699 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 06:47:43.445709 | orchestrator | ok: [testbed-node-1] =>  2025-09-19 06:47:43.445720 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 06:47:43.445731 | orchestrator | ok: [testbed-node-2] =>  2025-09-19 06:47:43.445742 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 06:47:43.445752 | orchestrator | 2025-09-19 06:47:43.445763 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-19 06:47:43.445774 | orchestrator | Friday 19 September 2025 06:47:17 +0000 (0:00:00.304) 0:05:07.788 ****** 2025-09-19 06:47:43.445786 | orchestrator | ok: [testbed-manager] =>  2025-09-19 06:47:43.445797 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-19 06:47:43.445808 | orchestrator | ok: [testbed-node-3] =>  2025-09-19 06:47:43.445820 | 
orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 06:47:43.445830 | orchestrator | ok: [testbed-node-4] =>
2025-09-19 06:47:43.445840 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 06:47:43.445849 | orchestrator | ok: [testbed-node-5] =>
2025-09-19 06:47:43.445859 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 06:47:43.445868 | orchestrator | ok: [testbed-node-0] =>
2025-09-19 06:47:43.445878 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 06:47:43.445887 | orchestrator | ok: [testbed-node-1] =>
2025-09-19 06:47:43.445897 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 06:47:43.445906 | orchestrator | ok: [testbed-node-2] =>
2025-09-19 06:47:43.445916 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 06:47:43.445925 | orchestrator |
2025-09-19 06:47:43.445935 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-19 06:47:43.445945 | orchestrator | Friday 19 September 2025 06:47:17 +0000 (0:00:00.282) 0:05:08.070 ******
2025-09-19 06:47:43.445954 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:47:43.445964 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:47:43.445973 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:47:43.445983 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:47:43.445992 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:47:43.446002 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:47:43.446011 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:47:43.446067 | orchestrator |
2025-09-19 06:47:43.446077 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-19 06:47:43.446087 | orchestrator | Friday 19 September 2025 06:47:18 +0000 (0:00:00.265) 0:05:08.336 ******
2025-09-19 06:47:43.446097 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:47:43.446114 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:47:43.446123
| orchestrator | skipping: [testbed-node-4]
2025-09-19 06:47:43.446133 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:47:43.446142 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:47:43.446152 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:47:43.446162 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:47:43.446171 | orchestrator |
2025-09-19 06:47:43.446181 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-09-19 06:47:43.446190 | orchestrator | Friday 19 September 2025 06:47:18 +0000 (0:00:00.411) 0:05:08.618 ******
2025-09-19 06:47:43.446216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:47:43.446229 | orchestrator |
2025-09-19 06:47:43.446239 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-09-19 06:47:43.446248 | orchestrator | Friday 19 September 2025 06:47:18 +0000 (0:00:00.411) 0:05:09.029 ******
2025-09-19 06:47:43.446258 | orchestrator | ok: [testbed-manager]
2025-09-19 06:47:43.446267 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:47:43.446277 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:47:43.446287 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:47:43.446296 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:47:43.446306 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:47:43.446315 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:47:43.446325 | orchestrator |
2025-09-19 06:47:43.446335 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-09-19 06:47:43.446344 | orchestrator | Friday 19 September 2025 06:47:19 +0000 (0:00:00.866) 0:05:09.896 ******
2025-09-19 06:47:43.446354 | orchestrator | ok: [testbed-manager]
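The `docker_version` and `docker_cli_version` values reported above (`5:27.5.1`) are Debian package version strings with an epoch prefix (`epoch:upstream[-revision]`). As an aside for readers of this log, a minimal sketch of splitting such a string into its parts (an illustrative helper, not part of the job itself):

```python
# Split a Debian package version string of the form [epoch:]upstream[-revision],
# e.g. the "5:27.5.1" reported by the docker version tasks above.
def split_debian_version(version: str) -> dict:
    epoch, sep, rest = version.partition(":")
    if not sep:  # no epoch present; Debian policy treats it as epoch 0
        epoch, rest = "0", version
    upstream, _, revision = rest.partition("-")
    return {"epoch": int(epoch), "upstream": upstream, "revision": revision}

print(split_debian_version("5:27.5.1"))
# → {'epoch': 5, 'upstream': '27.5.1', 'revision': ''}
```

The epoch (`5:` here) exists so a packager can reset the upstream numbering scheme; apt compares epochs before anything else, which is why pinned versions in the tasks below carry it.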
2025-09-19 06:47:43.446364 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:47:43.446389 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:47:43.446399 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:47:43.446409 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:47:43.446418 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:47:43.446428 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:47:43.446437 | orchestrator |
2025-09-19 06:47:43.446447 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-09-19 06:47:43.446458 | orchestrator | Friday 19 September 2025 06:47:22 +0000 (0:00:03.231) 0:05:13.128 ******
2025-09-19 06:47:43.446467 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-09-19 06:47:43.446477 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-09-19 06:47:43.446487 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-09-19 06:47:43.446496 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-09-19 06:47:43.446506 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-09-19 06:47:43.446515 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-09-19 06:47:43.446554 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:47:43.446563 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-09-19 06:47:43.446573 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-09-19 06:47:43.446582 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-09-19 06:47:43.446592 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:47:43.446601 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-09-19 06:47:43.446611 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-09-19 06:47:43.446633 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:47:43.446652 | orchestrator | skipping:
[testbed-node-5] => (item=docker-engine)
2025-09-19 06:47:43.446662 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-09-19 06:47:43.446671 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-09-19 06:47:43.446681 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-09-19 06:47:43.446697 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:47:43.446707 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-09-19 06:47:43.446717 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-09-19 06:47:43.446726 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:47:43.446735 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-09-19 06:47:43.446745 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:47:43.446759 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-09-19 06:47:43.446769 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-09-19 06:47:43.446779 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-09-19 06:47:43.446788 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:47:43.446798 | orchestrator |
2025-09-19 06:47:43.446807 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-09-19 06:47:43.446817 | orchestrator | Friday 19 September 2025 06:47:23 +0000 (0:00:00.578) 0:05:13.706 ******
2025-09-19 06:47:43.446826 | orchestrator | ok: [testbed-manager]
2025-09-19 06:47:43.446836 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:47:43.446845 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:47:43.446855 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:47:43.446864 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:47:43.446874 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:47:43.446883 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:47:43.446893 | orchestrator |
2025-09-19
06:47:43.446903 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-09-19 06:47:43.446912 | orchestrator | Friday 19 September 2025 06:47:30 +0000 (0:00:06.924) 0:05:20.630 ******
2025-09-19 06:47:43.446922 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:47:43.446931 | orchestrator | ok: [testbed-manager]
2025-09-19 06:47:43.446941 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:47:43.446950 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:47:43.446960 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:47:43.446969 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:47:43.446979 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:47:43.446988 | orchestrator |
2025-09-19 06:47:43.446998 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-09-19 06:47:43.447007 | orchestrator | Friday 19 September 2025 06:47:31 +0000 (0:00:01.265) 0:05:21.896 ******
2025-09-19 06:47:43.447017 | orchestrator | ok: [testbed-manager]
2025-09-19 06:47:43.447027 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:47:43.447036 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:47:43.447046 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:47:43.447055 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:47:43.447064 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:47:43.447074 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:47:43.447083 | orchestrator |
2025-09-19 06:47:43.447093 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-09-19 06:47:43.447103 | orchestrator | Friday 19 September 2025 06:47:40 +0000 (0:00:08.390) 0:05:30.286 ******
2025-09-19 06:47:43.447112 | orchestrator | changed: [testbed-manager]
2025-09-19 06:47:43.447122 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:47:43.447131 | orchestrator | changed: [testbed-node-5]
2025-09-19
06:47:43.447147 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:29.298286 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:29.298401 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:29.298416 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:29.298429 | orchestrator |
2025-09-19 06:48:29.298442 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-09-19 06:48:29.298455 | orchestrator | Friday 19 September 2025 06:47:43 +0000 (0:00:03.289) 0:05:33.576 ******
2025-09-19 06:48:29.298466 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:29.298478 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:29.298489 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:29.298601 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:29.298615 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:29.298626 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:29.298637 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:29.298647 | orchestrator |
2025-09-19 06:48:29.298659 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-09-19 06:48:29.298670 | orchestrator | Friday 19 September 2025 06:47:44 +0000 (0:00:01.343) 0:05:34.920 ******
2025-09-19 06:48:29.298681 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:29.298691 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:29.298702 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:29.298713 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:29.298724 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:29.298735 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:29.298745 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:29.298756 | orchestrator |
2025-09-19 06:48:29.298767 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-09-19
06:48:29.298778 | orchestrator | Friday 19 September 2025 06:47:46 +0000 (0:00:01.322) 0:05:36.242 ******
2025-09-19 06:48:29.298789 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:29.298799 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:29.298810 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:29.298821 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:29.298831 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:29.298842 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:29.298853 | orchestrator | changed: [testbed-manager]
2025-09-19 06:48:29.298864 | orchestrator |
2025-09-19 06:48:29.298875 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-09-19 06:48:29.298886 | orchestrator | Friday 19 September 2025 06:47:46 +0000 (0:00:00.806) 0:05:37.049 ******
2025-09-19 06:48:29.298896 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:29.298908 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:29.298919 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:29.298929 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:29.298940 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:29.298951 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:29.298961 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:29.298972 | orchestrator |
2025-09-19 06:48:29.298983 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-09-19 06:48:29.298994 | orchestrator | Friday 19 September 2025 06:47:57 +0000 (0:00:10.300) 0:05:47.350 ******
2025-09-19 06:48:29.299005 | orchestrator | changed: [testbed-manager]
2025-09-19 06:48:29.299015 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:29.299026 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:29.299037 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:29.299047 | orchestrator | changed:
[testbed-node-0]
2025-09-19 06:48:29.299058 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:29.299069 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:29.299079 | orchestrator |
2025-09-19 06:48:29.299090 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-09-19 06:48:29.299117 | orchestrator | Friday 19 September 2025 06:47:58 +0000 (0:00:00.897) 0:05:48.248 ******
2025-09-19 06:48:29.299128 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:29.299139 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:29.299149 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:29.299160 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:29.299171 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:29.299181 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:29.299192 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:29.299203 | orchestrator |
2025-09-19 06:48:29.299214 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-09-19 06:48:29.299224 | orchestrator | Friday 19 September 2025 06:48:07 +0000 (0:00:09.129) 0:05:57.378 ******
2025-09-19 06:48:29.299243 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:29.299254 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:29.299264 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:29.299275 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:29.299286 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:29.299297 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:29.299307 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:29.299318 | orchestrator |
2025-09-19 06:48:29.299330 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-09-19 06:48:29.299341 | orchestrator | Friday 19 September 2025 06:48:18 +0000 (0:00:11.171) 0:06:08.549 ******
2025-09-19
06:48:29.299352 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-09-19 06:48:29.299363 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-09-19 06:48:29.299374 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-09-19 06:48:29.299384 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-09-19 06:48:29.299395 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-09-19 06:48:29.299406 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-09-19 06:48:29.299417 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-09-19 06:48:29.299427 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-09-19 06:48:29.299438 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-09-19 06:48:29.299449 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-09-19 06:48:29.299459 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-09-19 06:48:29.299470 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-09-19 06:48:29.299481 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-09-19 06:48:29.299492 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-09-19 06:48:29.299502 | orchestrator |
2025-09-19 06:48:29.299513 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-09-19 06:48:29.299563 | orchestrator | Friday 19 September 2025 06:48:19 +0000 (0:00:01.243) 0:06:09.792 ******
2025-09-19 06:48:29.299576 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:29.299587 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:29.299598 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:29.299609 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:29.299620 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:29.299631 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:29.299641 | orchestrator
| skipping: [testbed-node-2]
2025-09-19 06:48:29.299652 | orchestrator |
2025-09-19 06:48:29.299663 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-09-19 06:48:29.299674 | orchestrator | Friday 19 September 2025 06:48:20 +0000 (0:00:00.544) 0:06:10.336 ******
2025-09-19 06:48:29.299685 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:29.299695 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:29.299706 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:29.299717 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:29.299727 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:29.299738 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:29.299749 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:29.299759 | orchestrator |
2025-09-19 06:48:29.299770 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-09-19 06:48:29.299782 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:04.129) 0:06:14.466 ******
2025-09-19 06:48:29.299793 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:29.299804 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:29.299815 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:29.299825 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:29.299836 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:29.299846 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:29.299857 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:29.299875 | orchestrator |
2025-09-19 06:48:29.299887 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-09-19 06:48:29.299899 | orchestrator | Friday 19 September 2025 06:48:24 +0000 (0:00:00.841) 0:06:15.057 ******
2025-09-19 06:48:29.299910 | orchestrator | skipping: [testbed-manager] =>
(item=python3-docker)
2025-09-19 06:48:29.299921 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-09-19 06:48:29.299932 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:29.299943 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-19 06:48:29.299953 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-19 06:48:29.299964 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:29.299975 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-19 06:48:29.299986 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-19 06:48:29.299996 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:29.300007 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-19 06:48:29.300018 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-19 06:48:29.300029 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:29.300039 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-09-19 06:48:29.300050 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-19 06:48:29.300061 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:29.300071 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-19 06:48:29.300088 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-19 06:48:29.300099 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:29.300109 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-19 06:48:29.300120 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-19 06:48:29.300131 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:29.300142 | orchestrator |
2025-09-19 06:48:29.300153 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-19 06:48:29.300164 |
orchestrator | Friday 19 September 2025 06:48:25 +0000 (0:00:00.841) 0:06:15.898 ******
2025-09-19 06:48:29.300175 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:29.300185 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:29.300196 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:29.300207 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:29.300218 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:29.300228 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:29.300239 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:29.300250 | orchestrator |
2025-09-19 06:48:29.300261 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-19 06:48:29.300272 | orchestrator | Friday 19 September 2025 06:48:26 +0000 (0:00:00.570) 0:06:16.469 ******
2025-09-19 06:48:29.300282 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:29.300293 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:29.300304 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:29.300315 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:29.300326 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:48:29.300336 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:29.300347 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:29.300358 | orchestrator |
2025-09-19 06:48:29.300369 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-19 06:48:29.300380 | orchestrator | Friday 19 September 2025 06:48:26 +0000 (0:00:00.511) 0:06:16.981 ******
2025-09-19 06:48:29.300391 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:29.300402 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:48:29.300412 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:48:29.300423 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:48:29.300434 | orchestrator |
skipping: [testbed-node-0]
2025-09-19 06:48:29.300451 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:48:29.300462 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:48:29.300473 | orchestrator |
2025-09-19 06:48:29.300484 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-19 06:48:29.300495 | orchestrator | Friday 19 September 2025 06:48:27 +0000 (0:00:00.554) 0:06:17.535 ******
2025-09-19 06:48:29.300506 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:29.300523 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:48:51.282231 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:48:51.282337 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:48:51.282350 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:48:51.282361 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:48:51.282371 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:48:51.282381 | orchestrator |
2025-09-19 06:48:51.282392 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-19 06:48:51.282404 | orchestrator | Friday 19 September 2025 06:48:29 +0000 (0:00:01.895) 0:06:19.431 ******
2025-09-19 06:48:51.282415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:48:51.282427 | orchestrator |
2025-09-19 06:48:51.282437 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-19 06:48:51.282447 | orchestrator | Friday 19 September 2025 06:48:30 +0000 (0:00:01.014) 0:06:20.446 ******
2025-09-19 06:48:51.282458 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:51.282474 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:51.282492 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:51.282508 | orchestrator |
changed: [testbed-node-5]
2025-09-19 06:48:51.282563 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:51.282582 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:51.282598 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:51.282613 | orchestrator |
2025-09-19 06:48:51.282629 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-09-19 06:48:51.282645 | orchestrator | Friday 19 September 2025 06:48:31 +0000 (0:00:00.869) 0:06:21.315 ******
2025-09-19 06:48:51.282661 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:51.282677 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:51.282694 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:51.282711 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:51.282729 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:51.282747 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:51.282763 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:51.282779 | orchestrator |
2025-09-19 06:48:51.282799 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-09-19 06:48:51.282817 | orchestrator | Friday 19 September 2025 06:48:31 +0000 (0:00:00.821) 0:06:22.136 ******
2025-09-19 06:48:51.282833 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:51.282849 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:51.282866 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:51.282883 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:51.282899 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:51.282916 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:51.282932 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:51.282949 | orchestrator |
2025-09-19 06:48:51.282966 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-09-19 06:48:51.282985 |
orchestrator | Friday 19 September 2025 06:48:33 +0000 (0:00:01.314) 0:06:23.451 ******
2025-09-19 06:48:51.283001 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:48:51.283016 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:48:51.283033 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:48:51.283052 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:48:51.283072 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:48:51.283091 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:48:51.283141 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:48:51.283153 | orchestrator |
2025-09-19 06:48:51.283164 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-09-19 06:48:51.283191 | orchestrator | Friday 19 September 2025 06:48:34 +0000 (0:00:01.529) 0:06:24.980 ******
2025-09-19 06:48:51.283202 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:51.283213 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:51.283224 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:51.283235 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:51.283245 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:51.283256 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:51.283267 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:51.283278 | orchestrator |
2025-09-19 06:48:51.283289 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-09-19 06:48:51.283300 | orchestrator | Friday 19 September 2025 06:48:36 +0000 (0:00:01.324) 0:06:26.304 ******
2025-09-19 06:48:51.283310 | orchestrator | changed: [testbed-manager]
2025-09-19 06:48:51.283321 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:48:51.283332 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:48:51.283342 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:48:51.283353 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:48:51.283364 |
orchestrator | changed: [testbed-node-1]
2025-09-19 06:48:51.283374 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:48:51.283385 | orchestrator |
2025-09-19 06:48:51.283396 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-19 06:48:51.283407 | orchestrator | Friday 19 September 2025 06:48:37 +0000 (0:00:01.395) 0:06:27.700 ******
2025-09-19 06:48:51.283418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:48:51.283430 | orchestrator |
2025-09-19 06:48:51.283441 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-19 06:48:51.283452 | orchestrator | Friday 19 September 2025 06:48:38 +0000 (0:00:01.087) 0:06:28.787 ******
2025-09-19 06:48:51.283463 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:51.283473 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:48:51.283484 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:48:51.283495 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:48:51.283506 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:48:51.283517 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:48:51.283552 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:48:51.283564 | orchestrator |
2025-09-19 06:48:51.283575 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-19 06:48:51.283586 | orchestrator | Friday 19 September 2025 06:48:40 +0000 (0:00:01.393) 0:06:30.181 ******
2025-09-19 06:48:51.283597 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:51.283607 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:48:51.283639 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:48:51.283651 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:48:51.283662 | orchestrator |
ok: [testbed-node-1]
2025-09-19 06:48:51.283672 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:48:51.283683 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:48:51.283694 | orchestrator |
2025-09-19 06:48:51.283705 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-19 06:48:51.283716 | orchestrator | Friday 19 September 2025 06:48:41 +0000 (0:00:01.143) 0:06:31.325 ******
2025-09-19 06:48:51.283726 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:51.283737 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:48:51.283748 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:48:51.283758 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:48:51.283769 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:48:51.283780 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:48:51.283790 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:48:51.283801 | orchestrator |
2025-09-19 06:48:51.283812 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-19 06:48:51.283832 | orchestrator | Friday 19 September 2025 06:48:42 +0000 (0:00:01.139) 0:06:32.465 ******
2025-09-19 06:48:51.283843 | orchestrator | ok: [testbed-manager]
2025-09-19 06:48:51.283854 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:48:51.283864 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:48:51.283875 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:48:51.283886 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:48:51.283897 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:48:51.283907 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:48:51.283918 | orchestrator |
2025-09-19 06:48:51.283929 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-19 06:48:51.283940 | orchestrator | Friday 19 September 2025 06:48:43 +0000 (0:00:01.139) 0:06:33.605 ******
2025-09-19 06:48:51.283951 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:48:51.283962 | orchestrator | 2025-09-19 06:48:51.283972 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:51.283983 | orchestrator | Friday 19 September 2025 06:48:44 +0000 (0:00:01.104) 0:06:34.709 ****** 2025-09-19 06:48:51.283994 | orchestrator | 2025-09-19 06:48:51.284005 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:51.284016 | orchestrator | Friday 19 September 2025 06:48:44 +0000 (0:00:00.040) 0:06:34.749 ****** 2025-09-19 06:48:51.284026 | orchestrator | 2025-09-19 06:48:51.284037 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:51.284048 | orchestrator | Friday 19 September 2025 06:48:44 +0000 (0:00:00.038) 0:06:34.787 ****** 2025-09-19 06:48:51.284059 | orchestrator | 2025-09-19 06:48:51.284069 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:51.284080 | orchestrator | Friday 19 September 2025 06:48:44 +0000 (0:00:00.046) 0:06:34.834 ****** 2025-09-19 06:48:51.284091 | orchestrator | 2025-09-19 06:48:51.284101 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:51.284112 | orchestrator | Friday 19 September 2025 06:48:44 +0000 (0:00:00.038) 0:06:34.872 ****** 2025-09-19 06:48:51.284123 | orchestrator | 2025-09-19 06:48:51.284134 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:51.284145 | orchestrator | Friday 19 September 2025 06:48:44 +0000 (0:00:00.038) 0:06:34.911 ****** 2025-09-19 06:48:51.284155 | orchestrator | 2025-09-19 
06:48:51.284166 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-19 06:48:51.284177 | orchestrator | Friday 19 September 2025 06:48:44 +0000 (0:00:00.045) 0:06:34.956 ****** 2025-09-19 06:48:51.284187 | orchestrator | 2025-09-19 06:48:51.284198 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-19 06:48:51.284209 | orchestrator | Friday 19 September 2025 06:48:44 +0000 (0:00:00.039) 0:06:34.995 ****** 2025-09-19 06:48:51.284220 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:48:51.284231 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:48:51.284242 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:48:51.284253 | orchestrator | 2025-09-19 06:48:51.284264 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-19 06:48:51.284274 | orchestrator | Friday 19 September 2025 06:48:46 +0000 (0:00:01.232) 0:06:36.228 ****** 2025-09-19 06:48:51.284285 | orchestrator | changed: [testbed-manager] 2025-09-19 06:48:51.284296 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:51.284307 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:51.284317 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:51.284328 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:51.284339 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:51.284358 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:51.284369 | orchestrator | 2025-09-19 06:48:51.284380 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-19 06:48:51.284398 | orchestrator | Friday 19 September 2025 06:48:47 +0000 (0:00:01.340) 0:06:37.569 ****** 2025-09-19 06:48:51.284409 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:48:51.284419 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:51.284430 | orchestrator | changed: [testbed-node-4] 2025-09-19 
06:48:51.284441 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:51.284452 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:48:51.284462 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:48:51.284473 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:48:51.284484 | orchestrator | 2025-09-19 06:48:51.284495 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-19 06:48:51.284506 | orchestrator | Friday 19 September 2025 06:48:49 +0000 (0:00:02.564) 0:06:40.134 ****** 2025-09-19 06:48:51.284517 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:48:51.284544 | orchestrator | 2025-09-19 06:48:51.284556 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-19 06:48:51.284567 | orchestrator | Friday 19 September 2025 06:48:50 +0000 (0:00:00.163) 0:06:40.297 ****** 2025-09-19 06:48:51.284578 | orchestrator | ok: [testbed-manager] 2025-09-19 06:48:51.284588 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:48:51.284599 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:48:51.284610 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:48:51.284627 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:49:18.245994 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:49:18.246217 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:49:18.246246 | orchestrator | 2025-09-19 06:49:18.246266 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-19 06:49:18.246285 | orchestrator | Friday 19 September 2025 06:48:51 +0000 (0:00:01.113) 0:06:41.411 ****** 2025-09-19 06:49:18.246304 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:49:18.246322 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:49:18.246339 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:49:18.246355 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
06:49:18.246372 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:49:18.246390 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:49:18.246407 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:49:18.246424 | orchestrator | 2025-09-19 06:49:18.246442 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-19 06:49:18.246461 | orchestrator | Friday 19 September 2025 06:48:51 +0000 (0:00:00.557) 0:06:41.968 ****** 2025-09-19 06:49:18.246481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:49:18.246503 | orchestrator | 2025-09-19 06:49:18.246522 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-19 06:49:18.246715 | orchestrator | Friday 19 September 2025 06:48:52 +0000 (0:00:01.117) 0:06:43.086 ****** 2025-09-19 06:49:18.246736 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:18.246750 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:18.246763 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:18.246776 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:18.246789 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:18.246802 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:18.246814 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:18.246826 | orchestrator | 2025-09-19 06:49:18.246839 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-19 06:49:18.246852 | orchestrator | Friday 19 September 2025 06:48:53 +0000 (0:00:00.847) 0:06:43.934 ****** 2025-09-19 06:49:18.246864 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-19 06:49:18.246877 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-19 06:49:18.246890 
| orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-19 06:49:18.246932 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-19 06:49:18.246945 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-19 06:49:18.246958 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-19 06:49:18.246971 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-19 06:49:18.246984 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-19 06:49:18.246997 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-19 06:49:18.247008 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-19 06:49:18.247019 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-19 06:49:18.247029 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-19 06:49:18.247040 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-19 06:49:18.247067 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-19 06:49:18.247078 | orchestrator | 2025-09-19 06:49:18.247089 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-19 06:49:18.247100 | orchestrator | Friday 19 September 2025 06:48:56 +0000 (0:00:02.457) 0:06:46.391 ****** 2025-09-19 06:49:18.247111 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:49:18.247121 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:49:18.247132 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:49:18.247142 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:49:18.247153 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:49:18.247163 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:49:18.247174 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:49:18.247185 | orchestrator | 2025-09-19 06:49:18.247195 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-19 06:49:18.247207 | orchestrator | Friday 19 September 2025 06:48:56 +0000 (0:00:00.527) 0:06:46.919 ****** 2025-09-19 06:49:18.247219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:49:18.247232 | orchestrator | 2025-09-19 06:49:18.247243 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-19 06:49:18.247254 | orchestrator | Friday 19 September 2025 06:48:58 +0000 (0:00:01.233) 0:06:48.152 ****** 2025-09-19 06:49:18.247264 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:18.247275 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:18.247286 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:18.247296 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:18.247307 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:18.247317 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:18.247327 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:18.247338 | orchestrator | 2025-09-19 06:49:18.247349 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-19 06:49:18.247359 | orchestrator | Friday 19 September 2025 06:48:58 +0000 (0:00:00.839) 0:06:48.992 ****** 2025-09-19 06:49:18.247370 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:18.247381 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:18.247391 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:18.247401 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:18.247412 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:18.247422 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:18.247433 | orchestrator | ok: [testbed-node-2] 2025-09-19 
06:49:18.247443 | orchestrator | 2025-09-19 06:49:18.247454 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-19 06:49:18.247487 | orchestrator | Friday 19 September 2025 06:48:59 +0000 (0:00:00.888) 0:06:49.880 ****** 2025-09-19 06:49:18.247498 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:49:18.247509 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:49:18.247520 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:49:18.247577 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:49:18.247588 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:49:18.247599 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:49:18.247609 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:49:18.247620 | orchestrator | 2025-09-19 06:49:18.247631 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-19 06:49:18.247641 | orchestrator | Friday 19 September 2025 06:49:00 +0000 (0:00:00.544) 0:06:50.424 ****** 2025-09-19 06:49:18.247652 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:18.247662 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:18.247673 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:18.247683 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:18.247694 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:18.247704 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:18.247715 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:18.247725 | orchestrator | 2025-09-19 06:49:18.247736 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-19 06:49:18.247747 | orchestrator | Friday 19 September 2025 06:49:02 +0000 (0:00:01.843) 0:06:52.268 ****** 2025-09-19 06:49:18.247757 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:49:18.247768 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:49:18.247779 | orchestrator | skipping: 
[testbed-node-4] 2025-09-19 06:49:18.247789 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:49:18.247800 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:49:18.247810 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:49:18.247821 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:49:18.247831 | orchestrator | 2025-09-19 06:49:18.247842 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-19 06:49:18.247852 | orchestrator | Friday 19 September 2025 06:49:02 +0000 (0:00:00.514) 0:06:52.782 ****** 2025-09-19 06:49:18.247863 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:18.247873 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:49:18.247884 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:49:18.247894 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:49:18.247905 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:49:18.247915 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:49:18.247926 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:49:18.247936 | orchestrator | 2025-09-19 06:49:18.247947 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-19 06:49:18.247957 | orchestrator | Friday 19 September 2025 06:49:10 +0000 (0:00:07.915) 0:07:00.697 ****** 2025-09-19 06:49:18.247968 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:18.247978 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:49:18.247989 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:49:18.247999 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:49:18.248010 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:49:18.248020 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:49:18.248030 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:49:18.248041 | orchestrator | 2025-09-19 06:49:18.248052 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-09-19 06:49:18.248062 | orchestrator | Friday 19 September 2025 06:49:11 +0000 (0:00:01.333) 0:07:02.030 ****** 2025-09-19 06:49:18.248073 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:18.248083 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:49:18.248094 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:49:18.248104 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:49:18.248115 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:49:18.248131 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:49:18.248142 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:49:18.248153 | orchestrator | 2025-09-19 06:49:18.248163 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-19 06:49:18.248174 | orchestrator | Friday 19 September 2025 06:49:13 +0000 (0:00:01.766) 0:07:03.797 ****** 2025-09-19 06:49:18.248185 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:18.248203 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:49:18.248213 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:49:18.248224 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:49:18.248234 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:49:18.248245 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:49:18.248255 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:49:18.248266 | orchestrator | 2025-09-19 06:49:18.248276 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 06:49:18.248287 | orchestrator | Friday 19 September 2025 06:49:15 +0000 (0:00:01.982) 0:07:05.779 ****** 2025-09-19 06:49:18.248298 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:18.248308 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:18.248319 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:18.248329 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:18.248340 | orchestrator | ok: 
[testbed-node-0] 2025-09-19 06:49:18.248350 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:18.248361 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:18.248371 | orchestrator | 2025-09-19 06:49:18.248382 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 06:49:18.248393 | orchestrator | Friday 19 September 2025 06:49:16 +0000 (0:00:00.890) 0:07:06.670 ****** 2025-09-19 06:49:18.248403 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:49:18.248414 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:49:18.248424 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:49:18.248435 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:49:18.248446 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:49:18.248456 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:49:18.248466 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:49:18.248477 | orchestrator | 2025-09-19 06:49:18.248488 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-19 06:49:18.248498 | orchestrator | Friday 19 September 2025 06:49:17 +0000 (0:00:01.108) 0:07:07.778 ****** 2025-09-19 06:49:18.248509 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:49:18.248519 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:49:18.248547 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:49:18.248559 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:49:18.248569 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:49:18.248580 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:49:18.248590 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:49:18.248601 | orchestrator | 2025-09-19 06:49:18.248618 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-19 06:49:51.703728 | orchestrator | Friday 19 September 2025 06:49:18 +0000 (0:00:00.594) 0:07:08.373 
****** 2025-09-19 06:49:51.703837 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:51.703854 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:51.703865 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:51.703876 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:51.703887 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:51.703898 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:51.703910 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:51.703921 | orchestrator | 2025-09-19 06:49:51.703933 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-19 06:49:51.703944 | orchestrator | Friday 19 September 2025 06:49:18 +0000 (0:00:00.573) 0:07:08.947 ****** 2025-09-19 06:49:51.703955 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:51.703966 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:51.703977 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:51.703987 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:51.703998 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:51.704009 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:51.704020 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:51.704031 | orchestrator | 2025-09-19 06:49:51.704042 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-19 06:49:51.704053 | orchestrator | Friday 19 September 2025 06:49:19 +0000 (0:00:00.571) 0:07:09.518 ****** 2025-09-19 06:49:51.704090 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:51.704101 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:51.704115 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:51.704134 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:51.704152 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:51.704171 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:51.704182 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:51.704193 | orchestrator | 
2025-09-19 06:49:51.704204 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-19 06:49:51.704214 | orchestrator | Friday 19 September 2025 06:49:19 +0000 (0:00:00.558) 0:07:10.077 ****** 2025-09-19 06:49:51.704225 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:51.704236 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:51.704246 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:51.704257 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:51.704267 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:51.704278 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:51.704288 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:51.704299 | orchestrator | 2025-09-19 06:49:51.704310 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-19 06:49:51.704320 | orchestrator | Friday 19 September 2025 06:49:25 +0000 (0:00:06.004) 0:07:16.082 ****** 2025-09-19 06:49:51.704348 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:49:51.704370 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:49:51.704381 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:49:51.704392 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:49:51.704403 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:49:51.704413 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:49:51.704424 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:49:51.704435 | orchestrator | 2025-09-19 06:49:51.704446 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-19 06:49:51.704456 | orchestrator | Friday 19 September 2025 06:49:26 +0000 (0:00:00.598) 0:07:16.680 ****** 2025-09-19 06:49:51.704483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:49:51.704497 | orchestrator | 2025-09-19 06:49:51.704508 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-19 06:49:51.704519 | orchestrator | Friday 19 September 2025 06:49:27 +0000 (0:00:00.863) 0:07:17.544 ****** 2025-09-19 06:49:51.704548 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:51.704560 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:51.704571 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:51.704582 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:51.704593 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:51.704603 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:51.704614 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:51.704625 | orchestrator | 2025-09-19 06:49:51.704636 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-19 06:49:51.704646 | orchestrator | Friday 19 September 2025 06:49:29 +0000 (0:00:02.366) 0:07:19.910 ****** 2025-09-19 06:49:51.704657 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:51.704668 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:51.704678 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:51.704689 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:51.704699 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:51.704710 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:51.704721 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:51.704731 | orchestrator | 2025-09-19 06:49:51.704742 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-19 06:49:51.704753 | orchestrator | Friday 19 September 2025 06:49:30 +0000 (0:00:01.131) 0:07:21.042 ****** 2025-09-19 06:49:51.704764 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:51.704775 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:51.704794 | 
orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:51.704805 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:51.704816 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:51.704826 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:51.704837 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:51.704848 | orchestrator | 2025-09-19 06:49:51.704859 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-19 06:49:51.704869 | orchestrator | Friday 19 September 2025 06:49:31 +0000 (0:00:00.860) 0:07:21.903 ****** 2025-09-19 06:49:51.704880 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 06:49:51.704893 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 06:49:51.704904 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 06:49:51.704932 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 06:49:51.704944 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 06:49:51.704955 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 06:49:51.704966 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 06:49:51.704977 | orchestrator | 2025-09-19 06:49:51.704988 | orchestrator | TASK [osism.services.lldpd : Include 
distribution specific install tasks] ****** 2025-09-19 06:49:51.704998 | orchestrator | Friday 19 September 2025 06:49:33 +0000 (0:00:01.727) 0:07:23.630 ****** 2025-09-19 06:49:51.705010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 06:49:51.705021 | orchestrator | 2025-09-19 06:49:51.705032 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-19 06:49:51.705246 | orchestrator | Friday 19 September 2025 06:49:34 +0000 (0:00:01.130) 0:07:24.760 ****** 2025-09-19 06:49:51.705259 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:49:51.705271 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:49:51.705282 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:49:51.705292 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:49:51.705303 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:49:51.705314 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:49:51.705324 | orchestrator | changed: [testbed-manager] 2025-09-19 06:49:51.705335 | orchestrator | 2025-09-19 06:49:51.705346 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-19 06:49:51.705357 | orchestrator | Friday 19 September 2025 06:49:43 +0000 (0:00:08.869) 0:07:33.630 ****** 2025-09-19 06:49:51.705368 | orchestrator | ok: [testbed-manager] 2025-09-19 06:49:51.705379 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:49:51.705389 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:49:51.705400 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:49:51.705411 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:49:51.705421 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:49:51.705432 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:49:51.705443 | 
orchestrator |
2025-09-19 06:49:51.705454 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-19 06:49:51.705465 | orchestrator | Friday 19 September 2025 06:49:45 +0000 (0:00:01.922) 0:07:35.553 ******
2025-09-19 06:49:51.705475 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:51.705486 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:51.705506 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:51.705517 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:51.705548 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:51.705559 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:51.705570 | orchestrator |
2025-09-19 06:49:51.705581 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-19 06:49:51.705601 | orchestrator | Friday 19 September 2025 06:49:46 +0000 (0:00:01.308) 0:07:36.861 ******
2025-09-19 06:49:51.705612 | orchestrator | changed: [testbed-manager]
2025-09-19 06:49:51.705623 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:49:51.705634 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:49:51.705644 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:49:51.705655 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:49:51.705666 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:49:51.705677 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:49:51.705687 | orchestrator |
2025-09-19 06:49:51.705698 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-19 06:49:51.705709 | orchestrator |
2025-09-19 06:49:51.705720 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-19 06:49:51.705731 | orchestrator | Friday 19 September 2025 06:49:48 +0000 (0:00:01.292) 0:07:38.153 ******
2025-09-19 06:49:51.705742 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:49:51.705752 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:49:51.705763 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:49:51.705774 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:49:51.705784 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:49:51.705795 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:49:51.705806 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:49:51.705817 | orchestrator |
2025-09-19 06:49:51.705827 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-19 06:49:51.705838 | orchestrator |
2025-09-19 06:49:51.705849 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-19 06:49:51.705860 | orchestrator | Friday 19 September 2025 06:49:48 +0000 (0:00:00.541) 0:07:38.695 ******
2025-09-19 06:49:51.705871 | orchestrator | changed: [testbed-manager]
2025-09-19 06:49:51.705881 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:49:51.705892 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:49:51.705903 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:49:51.705913 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:49:51.705924 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:49:51.705935 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:49:51.705945 | orchestrator |
2025-09-19 06:49:51.705956 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-19 06:49:51.705967 | orchestrator | Friday 19 September 2025 06:49:49 +0000 (0:00:01.331) 0:07:40.026 ******
2025-09-19 06:49:51.705978 | orchestrator | ok: [testbed-manager]
2025-09-19 06:49:51.705988 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:49:51.705999 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:49:51.706010 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:49:51.706075 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:49:51.706086 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:49:51.706097 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:49:51.706108 | orchestrator |
2025-09-19 06:49:51.706120 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-19 06:49:51.706141 | orchestrator | Friday 19 September 2025 06:49:51 +0000 (0:00:01.804) 0:07:41.831 ******
2025-09-19 06:50:15.269348 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:50:15.269460 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:50:15.269474 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:50:15.269486 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:50:15.269498 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:50:15.269509 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:50:15.269520 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:50:15.269593 | orchestrator |
2025-09-19 06:50:15.269607 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-19 06:50:15.269619 | orchestrator | Friday 19 September 2025 06:49:52 +0000 (0:00:00.528) 0:07:42.360 ******
2025-09-19 06:50:15.269630 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:50:15.269643 | orchestrator |
2025-09-19 06:50:15.269654 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-19 06:50:15.269665 | orchestrator | Friday 19 September 2025 06:49:53 +0000 (0:00:01.096) 0:07:43.456 ******
2025-09-19 06:50:15.269677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:50:15.269690 | orchestrator |
2025-09-19 06:50:15.269701 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-19 06:50:15.269712 | orchestrator | Friday 19 September 2025 06:49:54 +0000 (0:00:00.873) 0:07:44.330 ******
2025-09-19 06:50:15.269723 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:15.269733 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:15.269744 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:15.269754 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:15.269765 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:15.269776 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:15.269786 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:15.269797 | orchestrator |
2025-09-19 06:50:15.269808 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-19 06:50:15.269819 | orchestrator | Friday 19 September 2025 06:50:02 +0000 (0:00:08.328) 0:07:52.659 ******
2025-09-19 06:50:15.269829 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:15.269840 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:15.269851 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:15.269861 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:15.269872 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:15.269883 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:15.269895 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:15.269907 | orchestrator |
2025-09-19 06:50:15.269920 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-19 06:50:15.269932 | orchestrator | Friday 19 September 2025 06:50:03 +0000 (0:00:00.837) 0:07:53.496 ******
2025-09-19 06:50:15.269944 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:15.269955 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:15.269967 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:15.269981 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:15.269992 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:15.270005 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:15.270074 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:15.270090 | orchestrator |
2025-09-19 06:50:15.270103 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-19 06:50:15.270116 | orchestrator | Friday 19 September 2025 06:50:04 +0000 (0:00:01.560) 0:07:55.057 ******
2025-09-19 06:50:15.270128 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:15.270140 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:15.270153 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:15.270165 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:15.270178 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:15.270190 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:15.270203 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:15.270215 | orchestrator |
2025-09-19 06:50:15.270229 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-09-19 06:50:15.270241 | orchestrator | Friday 19 September 2025 06:50:06 +0000 (0:00:01.790) 0:07:56.848 ******
2025-09-19 06:50:15.270252 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:15.270272 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:15.270283 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:15.270293 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:15.270304 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:15.270315 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:15.270326 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:15.270336 | orchestrator |
2025-09-19 06:50:15.270347 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-19 06:50:15.270358 | orchestrator | Friday 19 September 2025 06:50:07 +0000 (0:00:01.178) 0:07:58.026 ******
2025-09-19 06:50:15.270369 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:15.270380 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:15.270391 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:15.270401 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:15.270412 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:15.270423 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:15.270433 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:15.270444 | orchestrator |
2025-09-19 06:50:15.270455 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-09-19 06:50:15.270466 | orchestrator |
2025-09-19 06:50:15.270477 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-09-19 06:50:15.270555 | orchestrator | Friday 19 September 2025 06:50:09 +0000 (0:00:01.240) 0:07:59.266 ******
2025-09-19 06:50:15.270569 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:50:15.270580 | orchestrator |
2025-09-19 06:50:15.270592 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-19 06:50:15.270622 | orchestrator | Friday 19 September 2025 06:50:09 +0000 (0:00:00.734) 0:08:00.001 ******
2025-09-19 06:50:15.270634 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:15.270647 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:15.270658 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:15.270669 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:15.270680 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:15.270690 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:15.270701 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:15.270712 | orchestrator |
2025-09-19 06:50:15.270723 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-19 06:50:15.270735 | orchestrator | Friday 19 September 2025 06:50:10 +0000 (0:00:00.905) 0:08:00.907 ******
2025-09-19 06:50:15.270746 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:15.270757 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:15.270768 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:15.270778 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:15.270789 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:15.270800 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:15.270811 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:15.270822 | orchestrator |
2025-09-19 06:50:15.270833 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-09-19 06:50:15.270844 | orchestrator | Friday 19 September 2025 06:50:12 +0000 (0:00:01.449) 0:08:02.356 ******
2025-09-19 06:50:15.270855 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 06:50:15.270866 | orchestrator |
2025-09-19 06:50:15.270877 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-19 06:50:15.270888 | orchestrator | Friday 19 September 2025 06:50:13 +0000 (0:00:00.828) 0:08:03.184 ******
2025-09-19 06:50:15.270899 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:15.270910 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:15.270921 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:15.270932 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:15.270943 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:15.270962 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:15.270973 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:15.270983 | orchestrator |
2025-09-19 06:50:15.270995 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-19 06:50:15.271006 | orchestrator | Friday 19 September 2025 06:50:13 +0000 (0:00:00.844) 0:08:04.029 ******
2025-09-19 06:50:15.271017 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:15.271028 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:15.271039 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:15.271050 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:15.271060 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:15.271071 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:15.271082 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:15.271093 | orchestrator |
2025-09-19 06:50:15.271104 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:50:15.271116 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-09-19 06:50:15.271128 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:50:15.271144 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:50:15.271155 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:50:15.271166 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-19 06:50:15.271177 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:50:15.271188 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 06:50:15.271199 | orchestrator |
2025-09-19 06:50:15.271210 | orchestrator |
2025-09-19 06:50:15.271221 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:50:15.271232 | orchestrator | Friday 19 September 2025 06:50:15 +0000 (0:00:01.355) 0:08:05.384 ******
2025-09-19 06:50:15.271244 | orchestrator | ===============================================================================
2025-09-19 06:50:15.271255 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.66s
2025-09-19 06:50:15.271266 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.73s
2025-09-19 06:50:15.271277 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.07s
2025-09-19 06:50:15.271288 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.57s
2025-09-19 06:50:15.271299 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.02s
2025-09-19 06:50:15.271310 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.85s
2025-09-19 06:50:15.271321 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.17s
2025-09-19 06:50:15.271332 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.30s
2025-09-19 06:50:15.271343 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.13s
2025-09-19 06:50:15.271354 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.87s
2025-09-19 06:50:15.271371 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.39s
2025-09-19 06:50:15.701781 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.33s
2025-09-19 06:50:15.701868 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.25s
2025-09-19 06:50:15.701904 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.92s
2025-09-19 06:50:15.701914 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.91s
2025-09-19 06:50:15.701924 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.65s
2025-09-19 06:50:15.701933 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.92s
2025-09-19 06:50:15.701943 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.08s
2025-09-19 06:50:15.701953 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 6.00s
2025-09-19 06:50:15.701963 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.82s
2025-09-19 06:50:16.019831 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-19 06:50:16.019925 | orchestrator | + osism apply network
2025-09-19 06:50:28.814114 | orchestrator | 2025-09-19 06:50:28 | INFO  | Task 1cda9524-6a49-4c2f-9fc6-a025ff1bf525 (network) was prepared for execution.
2025-09-19 06:50:28.814232 | orchestrator | 2025-09-19 06:50:28 | INFO  | It takes a moment until task 1cda9524-6a49-4c2f-9fc6-a025ff1bf525 (network) has been started and output is visible here.
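The "Set state bootstrap" play above creates a custom facts directory on every host and writes the bootstrap state into a file there. A minimal sketch of what such an Ansible local fact file could look like; the path `/etc/ansible/facts.d/osism.fact` and the exact key names are assumptions for illustration, not taken from this log:

```ini
# Hypothetical local fact file (path and keys assumed).
# Ansible loads INI-style *.fact files from /etc/ansible/facts.d
# into the ansible_local variable during fact gathering.
[bootstrap]
status = True
timestamp = 2025-09-19T06:50:13+00:00
```

With a file like this in place, the task names "Set osism.bootstrap.status fact" and "Set osism.bootstrap.timestamp fact" would map to values readable as `ansible_local` facts after the next fact-gathering run.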
2025-09-19 06:50:57.862791 | orchestrator |
2025-09-19 06:50:57.862897 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-19 06:50:57.862912 | orchestrator |
2025-09-19 06:50:57.862922 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-19 06:50:57.862931 | orchestrator | Friday 19 September 2025 06:50:33 +0000 (0:00:00.298) 0:00:00.298 ******
2025-09-19 06:50:57.862941 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:57.862951 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:57.862960 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:57.862969 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:57.862978 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:57.862987 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:57.862996 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:57.863005 | orchestrator |
2025-09-19 06:50:57.863013 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-19 06:50:57.863022 | orchestrator | Friday 19 September 2025 06:50:34 +0000 (0:00:00.740) 0:00:01.039 ******
2025-09-19 06:50:57.863032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:50:57.863044 | orchestrator |
2025-09-19 06:50:57.863053 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-19 06:50:57.863062 | orchestrator | Friday 19 September 2025 06:50:35 +0000 (0:00:01.243) 0:00:02.282 ******
2025-09-19 06:50:57.863070 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:57.863079 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:57.863088 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:57.863097 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:57.863105 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:57.863114 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:57.863122 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:57.863131 | orchestrator |
2025-09-19 06:50:57.863140 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-19 06:50:57.863149 | orchestrator | Friday 19 September 2025 06:50:37 +0000 (0:00:01.894) 0:00:04.177 ******
2025-09-19 06:50:57.863157 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:57.863166 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:57.863175 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:57.863183 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:57.863192 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:57.863201 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:57.863209 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:57.863218 | orchestrator |
2025-09-19 06:50:57.863227 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-19 06:50:57.863259 | orchestrator | Friday 19 September 2025 06:50:38 +0000 (0:00:01.010) 0:00:05.920 ******
2025-09-19 06:50:57.863268 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-19 06:50:57.863277 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-19 06:50:57.863286 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-19 06:50:57.863294 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-19 06:50:57.863303 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-19 06:50:57.863312 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-19 06:50:57.863320 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-19 06:50:57.863329 | orchestrator |
2025-09-19 06:50:57.863338 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-19 06:50:57.863346 | orchestrator | Friday 19 September 2025 06:50:39 +0000 (0:00:01.010) 0:00:06.931 ******
2025-09-19 06:50:57.863355 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 06:50:57.863364 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-19 06:50:57.863373 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-19 06:50:57.863381 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 06:50:57.863390 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 06:50:57.863398 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 06:50:57.863407 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 06:50:57.863416 | orchestrator |
2025-09-19 06:50:57.863424 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-19 06:50:57.863433 | orchestrator | Friday 19 September 2025 06:50:43 +0000 (0:00:03.397) 0:00:10.328 ******
2025-09-19 06:50:57.863441 | orchestrator | changed: [testbed-manager]
2025-09-19 06:50:57.863451 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:57.863459 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:57.863468 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:57.863476 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:57.863485 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:57.863493 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:57.863502 | orchestrator |
2025-09-19 06:50:57.863511 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-19 06:50:57.863519 | orchestrator | Friday 19 September 2025 06:50:44 +0000 (0:00:01.474) 0:00:11.803 ******
2025-09-19 06:50:57.863550 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 06:50:57.863558 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-19 06:50:57.863567 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 06:50:57.863576 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-19 06:50:57.863584 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 06:50:57.863593 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 06:50:57.863601 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 06:50:57.863610 | orchestrator |
2025-09-19 06:50:57.863619 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-19 06:50:57.863627 | orchestrator | Friday 19 September 2025 06:50:46 +0000 (0:00:02.046) 0:00:13.849 ******
2025-09-19 06:50:57.863636 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:57.863645 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:57.863653 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:57.863662 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:57.863670 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:57.863679 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:57.863687 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:57.863696 | orchestrator |
2025-09-19 06:50:57.863704 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-19 06:50:57.863730 | orchestrator | Friday 19 September 2025 06:50:47 +0000 (0:00:01.118) 0:00:14.967 ******
2025-09-19 06:50:57.863739 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:50:57.863748 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:50:57.863757 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:50:57.863773 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:50:57.863782 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:50:57.863791 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:50:57.863799 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:50:57.863808 | orchestrator |
2025-09-19 06:50:57.863817 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-19 06:50:57.863826 | orchestrator | Friday 19 September 2025 06:50:48 +0000 (0:00:00.690) 0:00:15.658 ******
2025-09-19 06:50:57.863834 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:57.863843 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:57.863852 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:57.863860 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:57.863869 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:57.863878 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:57.863886 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:57.863895 | orchestrator |
2025-09-19 06:50:57.863904 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-19 06:50:57.863913 | orchestrator | Friday 19 September 2025 06:50:50 +0000 (0:00:01.974) 0:00:17.632 ******
2025-09-19 06:50:57.863921 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:50:57.863930 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:50:57.863939 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:50:57.863947 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:50:57.863956 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:50:57.863979 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:50:57.863989 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-19 06:50:57.863998 | orchestrator |
2025-09-19 06:50:57.864007 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-19 06:50:57.864016 | orchestrator | Friday 19 September 2025 06:50:51 +0000 (0:00:00.973) 0:00:18.606 ******
2025-09-19 06:50:57.864025 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:57.864034 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:50:57.864044 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:50:57.864058 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:50:57.864073 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:50:57.864088 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:50:57.864102 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:50:57.864116 | orchestrator |
2025-09-19 06:50:57.864130 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-19 06:50:57.864144 | orchestrator | Friday 19 September 2025 06:50:53 +0000 (0:00:01.725) 0:00:20.332 ******
2025-09-19 06:50:57.864159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 06:50:57.864177 | orchestrator |
2025-09-19 06:50:57.864191 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-19 06:50:57.864206 | orchestrator | Friday 19 September 2025 06:50:54 +0000 (0:00:01.358) 0:00:21.691 ******
2025-09-19 06:50:57.864221 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:57.864231 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:57.864239 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:57.864248 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:57.864256 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:57.864265 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:57.864273 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:57.864282 | orchestrator |
2025-09-19 06:50:57.864290 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-19 06:50:57.864299 | orchestrator | Friday 19 September 2025 06:50:55 +0000 (0:00:01.033) 0:00:22.725 ******
2025-09-19 06:50:57.864308 | orchestrator | ok: [testbed-manager]
2025-09-19 06:50:57.864316 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:50:57.864325 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:50:57.864341 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:50:57.864350 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:50:57.864359 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:50:57.864367 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:50:57.864376 | orchestrator |
2025-09-19 06:50:57.864385 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-19 06:50:57.864393 | orchestrator | Friday 19 September 2025 06:50:56 +0000 (0:00:00.870) 0:00:23.595 ******
2025-09-19 06:50:57.864402 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:57.864411 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:57.864420 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:57.864428 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:57.864437 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 06:50:57.864446 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:57.864455 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 06:50:57.864463 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:57.864472 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 06:50:57.864481 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 06:50:57.864489 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 06:50:57.864498 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 06:50:57.864506 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 06:50:57.864515 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 06:50:57.864542 | orchestrator |
2025-09-19 06:50:57.864564 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-19 06:51:14.252148 | orchestrator | Friday 19 September 2025 06:50:57 +0000 (0:00:01.277) 0:00:24.873 ******
2025-09-19 06:51:14.252274 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:51:14.252298 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:51:14.252317 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:51:14.252338 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:51:14.252358 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:51:14.252378 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:51:14.252399 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:51:14.252420 | orchestrator |
2025-09-19 06:51:14.252441 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-19 06:51:14.252461 | orchestrator | Friday 19 September 2025 06:50:58 +0000 (0:00:00.669) 0:00:25.542 ******
2025-09-19 06:51:14.252482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-2, testbed-node-5
2025-09-19 06:51:14.252505 | orchestrator |
2025-09-19 06:51:14.252553 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-19 06:51:14.252576 | orchestrator | Friday 19 September 2025 06:51:03 +0000 (0:00:04.937) 0:00:30.480 ******
2025-09-19 06:51:14.252615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-19 06:51:14.252639 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-19 06:51:14.252693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-19 06:51:14.252717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-19 06:51:14.252744 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-19 06:51:14.252766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-19 06:51:14.252789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-19 06:51:14.252810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-19 06:51:14.252831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-19 06:51:14.252851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-19 06:51:14.252879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-19 06:51:14.252923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-19 06:51:14.252944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-19 06:51:14.252965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13',
'mtu': 1350, 'vni': 23}}) 2025-09-19 06:51:14.252985 | orchestrator | 2025-09-19 06:51:14.253005 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-19 06:51:14.253026 | orchestrator | Friday 19 September 2025 06:51:09 +0000 (0:00:05.766) 0:00:36.247 ****** 2025-09-19 06:51:14.253047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:51:14.253092 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:51:14.253113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:51:14.253130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:51:14.253149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:51:14.253169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:51:14.253189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-19 06:51:14.253208 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:51:14.253227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:51:14.253240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:51:14.253251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:51:14.253262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:51:14.253285 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:51:20.207835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-19 06:51:20.207938 | orchestrator | 2025-09-19 06:51:20.207953 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-19 06:51:20.207965 | orchestrator | Friday 19 September 2025 06:51:14 +0000 (0:00:05.020) 0:00:41.267 ****** 2025-09-19 06:51:20.207999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:51:20.208011 | orchestrator | 2025-09-19 06:51:20.208021 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-19 06:51:20.208031 | orchestrator | Friday 19 September 2025 06:51:15 +0000 (0:00:01.200) 0:00:42.467 ****** 2025-09-19 06:51:20.208040 | orchestrator | ok: [testbed-manager] 2025-09-19 06:51:20.208051 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:51:20.208061 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:51:20.208070 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:51:20.208080 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:51:20.208089 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:51:20.208099 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:51:20.208109 | orchestrator | 2025-09-19 06:51:20.208119 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2025-09-19 06:51:20.208129 | orchestrator | Friday 19 September 2025 06:51:16 +0000 (0:00:01.094) 0:00:43.562 ****** 2025-09-19 06:51:20.208139 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:51:20.208149 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:51:20.208158 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:51:20.208168 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:51:20.208178 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:51:20.208188 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:51:20.208216 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:51:20.208227 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:51:20.208237 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:51:20.208246 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:51:20.208256 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:51:20.208266 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:51:20.208275 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:51:20.208285 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:51:20.208295 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:51:20.208304 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:51:20.208314 | orchestrator | skipping: [testbed-node-2] => 
(item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:51:20.208323 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:51:20.208333 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:51:20.208342 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:51:20.208352 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:51:20.208361 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:51:20.208371 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:51:20.208380 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:51:20.208390 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:51:20.208399 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:51:20.208409 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:51:20.208428 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:51:20.208437 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:51:20.208447 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:51:20.208457 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-19 06:51:20.208466 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-19 06:51:20.208476 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-19 06:51:20.208485 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-19 06:51:20.208495 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 06:51:20.208504 | orchestrator | 2025-09-19 06:51:20.208514 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-19 06:51:20.208566 | orchestrator | Friday 19 September 2025 06:51:18 +0000 (0:00:02.001) 0:00:45.564 ****** 2025-09-19 06:51:20.208577 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:51:20.208587 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:51:20.208596 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:51:20.208606 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:51:20.208615 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:51:20.208625 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:51:20.208635 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:51:20.208644 | orchestrator | 2025-09-19 06:51:20.208654 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-19 06:51:20.208664 | orchestrator | Friday 19 September 2025 06:51:19 +0000 (0:00:00.617) 0:00:46.181 ****** 2025-09-19 06:51:20.208673 | orchestrator | skipping: [testbed-manager] 2025-09-19 06:51:20.208683 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:51:20.208692 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:51:20.208702 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:51:20.208712 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:51:20.208721 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:51:20.208731 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:51:20.208740 | orchestrator | 2025-09-19 06:51:20.208750 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:51:20.208766 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 06:51:20.208778 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6 
 rescued=0 ignored=0 2025-09-19 06:51:20.208787 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:51:20.208797 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:51:20.208807 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:51:20.208817 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:51:20.208826 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 06:51:20.208836 | orchestrator | 2025-09-19 06:51:20.208846 | orchestrator | 2025-09-19 06:51:20.208856 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:51:20.208865 | orchestrator | Friday 19 September 2025 06:51:19 +0000 (0:00:00.685) 0:00:46.866 ****** 2025-09-19 06:51:20.208875 | orchestrator | =============================================================================== 2025-09-19 06:51:20.208891 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.77s 2025-09-19 06:51:20.208901 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.02s 2025-09-19 06:51:20.208910 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.94s 2025-09-19 06:51:20.208920 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.40s 2025-09-19 06:51:20.208930 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.05s 2025-09-19 06:51:20.208939 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.00s 2025-09-19 06:51:20.208949 | orchestrator | osism.commons.network : Install package networkd-dispatcher 
------------- 1.97s 2025-09-19 06:51:20.208958 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.89s 2025-09-19 06:51:20.208968 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s 2025-09-19 06:51:20.208978 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.73s 2025-09-19 06:51:20.208987 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.47s 2025-09-19 06:51:20.208997 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.36s 2025-09-19 06:51:20.209007 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.28s 2025-09-19 06:51:20.209016 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s 2025-09-19 06:51:20.209026 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.20s 2025-09-19 06:51:20.209036 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s 2025-09-19 06:51:20.209045 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.09s 2025-09-19 06:51:20.209055 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s 2025-09-19 06:51:20.209064 | orchestrator | osism.commons.network : Create required directories --------------------- 1.01s 2025-09-19 06:51:20.209074 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.97s 2025-09-19 06:51:20.534650 | orchestrator | + osism apply wireguard 2025-09-19 06:51:32.697278 | orchestrator | 2025-09-19 06:51:32 | INFO  | Task 438acea0-f532-4bf1-8a06-312c5b45c12a (wireguard) was prepared for execution. 
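For reference, the netdev/network pairs created by the two "Create systemd networkd …" tasks above would look roughly like the following sketch for testbed-node-0's vxlan0 (vni 42, local_ip 192.168.16.10, mtu 1350, one FDB entry per peer in `dests`). The file names `30-vxlan0.netdev` / `30-vxlan0.network` are taken from the cleanup task output; the option names follow systemd.netdev(5) and systemd.network(5), but the exact template the osism.commons.network role renders is not shown in this log, so treat this as an approximation:

```ini
# /etc/systemd/network/30-vxlan0.netdev (sketch)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.10

# /etc/systemd/network/30-vxlan0.network (sketch)
[Match]
Name=vxlan0

# One all-zero FDB entry per remote VTEP for head-end replication;
# repeated for each address in the logged 'dests' list.
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.11

[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.5
```

vxlan1 (vni 23) additionally carries an `Address=` line in its `[Network]` section, e.g. `192.168.128.10/20` for testbed-node-0, matching the `addresses` field in the task items above.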
2025-09-19 06:51:32.697386 | orchestrator | 2025-09-19 06:51:32 | INFO  | It takes a moment until task 438acea0-f532-4bf1-8a06-312c5b45c12a (wireguard) has been started and output is visible here. 2025-09-19 06:51:52.928373 | orchestrator | 2025-09-19 06:51:52.928479 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-19 06:51:52.928494 | orchestrator | 2025-09-19 06:51:52.928505 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-19 06:51:52.928515 | orchestrator | Friday 19 September 2025 06:51:36 +0000 (0:00:00.223) 0:00:00.223 ****** 2025-09-19 06:51:52.928570 | orchestrator | ok: [testbed-manager] 2025-09-19 06:51:52.928582 | orchestrator | 2025-09-19 06:51:52.928591 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-19 06:51:52.928601 | orchestrator | Friday 19 September 2025 06:51:38 +0000 (0:00:01.574) 0:00:01.798 ****** 2025-09-19 06:51:52.928611 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:52.928621 | orchestrator | 2025-09-19 06:51:52.928631 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-19 06:51:52.928645 | orchestrator | Friday 19 September 2025 06:51:44 +0000 (0:00:06.499) 0:00:08.297 ****** 2025-09-19 06:51:52.928662 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:52.928678 | orchestrator | 2025-09-19 06:51:52.928695 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-19 06:51:52.928712 | orchestrator | Friday 19 September 2025 06:51:45 +0000 (0:00:00.565) 0:00:08.863 ****** 2025-09-19 06:51:52.928751 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:52.928792 | orchestrator | 2025-09-19 06:51:52.928803 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-19 06:51:52.928814 | orchestrator 
| Friday 19 September 2025 06:51:45 +0000 (0:00:00.466) 0:00:09.329 ****** 2025-09-19 06:51:52.928824 | orchestrator | ok: [testbed-manager] 2025-09-19 06:51:52.928833 | orchestrator | 2025-09-19 06:51:52.928843 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-19 06:51:52.928852 | orchestrator | Friday 19 September 2025 06:51:46 +0000 (0:00:00.554) 0:00:09.884 ****** 2025-09-19 06:51:52.928861 | orchestrator | ok: [testbed-manager] 2025-09-19 06:51:52.928871 | orchestrator | 2025-09-19 06:51:52.928880 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-19 06:51:52.928890 | orchestrator | Friday 19 September 2025 06:51:47 +0000 (0:00:00.554) 0:00:10.438 ****** 2025-09-19 06:51:52.928899 | orchestrator | ok: [testbed-manager] 2025-09-19 06:51:52.928909 | orchestrator | 2025-09-19 06:51:52.928919 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-19 06:51:52.928930 | orchestrator | Friday 19 September 2025 06:51:47 +0000 (0:00:00.438) 0:00:10.876 ****** 2025-09-19 06:51:52.928941 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:52.928952 | orchestrator | 2025-09-19 06:51:52.928962 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-19 06:51:52.928973 | orchestrator | Friday 19 September 2025 06:51:48 +0000 (0:00:01.209) 0:00:12.085 ****** 2025-09-19 06:51:52.928984 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 06:51:52.928995 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:52.929006 | orchestrator | 2025-09-19 06:51:52.929016 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-19 06:51:52.929027 | orchestrator | Friday 19 September 2025 06:51:49 +0000 (0:00:00.946) 0:00:13.032 ****** 2025-09-19 06:51:52.929038 | orchestrator | changed: 
[testbed-manager] 2025-09-19 06:51:52.929050 | orchestrator | 2025-09-19 06:51:52.929061 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-19 06:51:52.929071 | orchestrator | Friday 19 September 2025 06:51:51 +0000 (0:00:01.837) 0:00:14.869 ****** 2025-09-19 06:51:52.929082 | orchestrator | changed: [testbed-manager] 2025-09-19 06:51:52.929094 | orchestrator | 2025-09-19 06:51:52.929106 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:51:52.929117 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:51:52.929129 | orchestrator | 2025-09-19 06:51:52.929140 | orchestrator | 2025-09-19 06:51:52.929151 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:51:52.929162 | orchestrator | Friday 19 September 2025 06:51:52 +0000 (0:00:01.008) 0:00:15.878 ****** 2025-09-19 06:51:52.929174 | orchestrator | =============================================================================== 2025-09-19 06:51:52.929184 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.50s 2025-09-19 06:51:52.929195 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.84s 2025-09-19 06:51:52.929206 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.57s 2025-09-19 06:51:52.929217 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s 2025-09-19 06:51:52.929227 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.01s 2025-09-19 06:51:52.929239 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s 2025-09-19 06:51:52.929250 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 
2025-09-19 06:51:52.929261 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s 2025-09-19 06:51:52.929272 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.55s 2025-09-19 06:51:52.929283 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s 2025-09-19 06:51:52.929300 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2025-09-19 06:51:53.311777 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-19 06:51:53.350233 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-19 06:51:53.350312 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-19 06:51:53.427181 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 198 0 --:--:-- --:--:-- --:--:-- 200 2025-09-19 06:51:53.440069 | orchestrator | + osism apply --environment custom workarounds 2025-09-19 06:51:55.345659 | orchestrator | 2025-09-19 06:51:55 | INFO  | Trying to run play workarounds in environment custom 2025-09-19 06:52:05.451233 | orchestrator | 2025-09-19 06:52:05 | INFO  | Task 95f60671-909b-43f5-9b63-ac5232810997 (workarounds) was prepared for execution. 2025-09-19 06:52:05.451346 | orchestrator | 2025-09-19 06:52:05 | INFO  | It takes a moment until task 95f60671-909b-43f5-9b63-ac5232810997 (workarounds) has been started and output is visible here. 
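The wireguard play above generates server/preshared keys and renders `/etc/wireguard/wg0.conf` before enabling `wg-quick@wg0.service`. A minimal sketch of what such a file contains, using the standard wg-quick(8) format (all key material and addresses here are placeholders, not values from this run):

```ini
# /etc/wireguard/wg0.conf (sketch; keys and addresses are placeholders)
[Interface]
PrivateKey = <server-private-key>
Address = <tunnel-address>/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = <client-tunnel-address>/32
```

The "Copy client configuration files" task renders the mirror-image client side, which is what `prepare-wireguard-configuration.sh` then collects.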
2025-09-19 06:52:30.406576 | orchestrator | 2025-09-19 06:52:30.406674 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 06:52:30.406687 | orchestrator | 2025-09-19 06:52:30.406697 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-19 06:52:30.406705 | orchestrator | Friday 19 September 2025 06:52:09 +0000 (0:00:00.145) 0:00:00.145 ****** 2025-09-19 06:52:30.406714 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-19 06:52:30.406722 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-19 06:52:30.406737 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-19 06:52:30.406745 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-19 06:52:30.406753 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-19 06:52:30.406761 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-19 06:52:30.406769 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-19 06:52:30.406777 | orchestrator | 2025-09-19 06:52:30.406785 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-19 06:52:30.406793 | orchestrator | 2025-09-19 06:52:30.406800 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-19 06:52:30.406808 | orchestrator | Friday 19 September 2025 06:52:10 +0000 (0:00:00.763) 0:00:00.909 ****** 2025-09-19 06:52:30.406817 | orchestrator | ok: [testbed-manager] 2025-09-19 06:52:30.406826 | orchestrator | 2025-09-19 06:52:30.406834 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-19 06:52:30.406841 | orchestrator | 2025-09-19 06:52:30.406849 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-19 06:52:30.406857 | orchestrator | Friday 19 September 2025 06:52:12 +0000 (0:00:02.306) 0:00:03.215 ****** 2025-09-19 06:52:30.406865 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:52:30.406873 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:52:30.406881 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:52:30.406889 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:52:30.406896 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:52:30.406904 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:52:30.406912 | orchestrator | 2025-09-19 06:52:30.406921 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-19 06:52:30.406929 | orchestrator | 2025-09-19 06:52:30.406937 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-19 06:52:30.406945 | orchestrator | Friday 19 September 2025 06:52:14 +0000 (0:00:01.800) 0:00:05.016 ****** 2025-09-19 06:52:30.406953 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 06:52:30.406962 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 06:52:30.406984 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 06:52:30.406993 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 06:52:30.407001 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 06:52:30.407008 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 06:52:30.407016 | orchestrator | 2025-09-19 06:52:30.407024 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-19 06:52:30.407032 | orchestrator | Friday 19 September 2025 06:52:15 +0000 (0:00:01.439) 0:00:06.455 ****** 2025-09-19 06:52:30.407040 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:52:30.407048 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:52:30.407055 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:52:30.407063 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:52:30.407071 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:52:30.407079 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:52:30.407086 | orchestrator | 2025-09-19 06:52:30.407094 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-19 06:52:30.407102 | orchestrator | Friday 19 September 2025 06:52:19 +0000 (0:00:03.635) 0:00:10.091 ****** 2025-09-19 06:52:30.407112 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:52:30.407121 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:52:30.407130 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:52:30.407139 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:52:30.407147 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:52:30.407156 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:52:30.407165 | orchestrator | 2025-09-19 06:52:30.407175 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-19 06:52:30.407184 | orchestrator | 2025-09-19 06:52:30.407193 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-19 06:52:30.407202 | orchestrator | Friday 19 September 2025 06:52:19 +0000 (0:00:00.687) 0:00:10.778 ****** 2025-09-19 06:52:30.407211 | orchestrator | changed: [testbed-manager] 2025-09-19 06:52:30.407219 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:52:30.407228 | orchestrator | changed: [testbed-node-1] 2025-09-19 
06:52:30.407238 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:52:30.407246 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:52:30.407255 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:52:30.407264 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:52:30.407273 | orchestrator |
2025-09-19 06:52:30.407282 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-09-19 06:52:30.407291 | orchestrator | Friday 19 September 2025 06:52:21 +0000 (0:00:01.698) 0:00:12.476 ******
2025-09-19 06:52:30.407301 | orchestrator | changed: [testbed-manager]
2025-09-19 06:52:30.407310 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:52:30.407319 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:52:30.407328 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:52:30.407336 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:52:30.407346 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:52:30.407368 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:52:30.407377 | orchestrator |
2025-09-19 06:52:30.407386 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-09-19 06:52:30.407396 | orchestrator | Friday 19 September 2025 06:52:23 +0000 (0:00:01.729) 0:00:14.206 ******
2025-09-19 06:52:30.407406 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:52:30.407415 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:52:30.407424 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:52:30.407432 | orchestrator | ok: [testbed-manager]
2025-09-19 06:52:30.407441 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:52:30.407455 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:52:30.407464 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:52:30.407473 | orchestrator |
2025-09-19 06:52:30.407485 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-09-19 06:52:30.407493 | orchestrator | Friday 19 September 2025 06:52:24 +0000 (0:00:01.555) 0:00:15.762 ******
2025-09-19 06:52:30.407501 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:52:30.407509 | orchestrator | changed: [testbed-manager]
2025-09-19 06:52:30.407516 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:52:30.407538 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:52:30.407546 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:52:30.407554 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:52:30.407561 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:52:30.407569 | orchestrator |
2025-09-19 06:52:30.407577 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-19 06:52:30.407585 | orchestrator | Friday 19 September 2025 06:52:26 +0000 (0:00:01.911) 0:00:17.673 ******
2025-09-19 06:52:30.407593 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:52:30.407600 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:52:30.407608 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:52:30.407616 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:52:30.407624 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:52:30.407631 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:52:30.407639 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:52:30.407647 | orchestrator |
2025-09-19 06:52:30.407655 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-19 06:52:30.407663 | orchestrator |
2025-09-19 06:52:30.407670 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-19 06:52:30.407678 | orchestrator | Friday 19 September 2025 06:52:27 +0000 (0:00:00.675) 0:00:18.349 ******
2025-09-19 06:52:30.407686 | orchestrator | ok: [testbed-manager]
2025-09-19 06:52:30.407694 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:52:30.407702 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:52:30.407709 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:52:30.407717 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:52:30.407725 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:52:30.407733 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:52:30.407740 | orchestrator |
2025-09-19 06:52:30.407748 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:52:30.407757 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:52:30.407766 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:30.407774 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:30.407782 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:30.407789 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:30.407797 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:30.407805 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:30.407813 | orchestrator |
2025-09-19 06:52:30.407821 | orchestrator |
2025-09-19 06:52:30.407829 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:52:30.407837 | orchestrator | Friday 19 September 2025 06:52:30 +0000 (0:00:02.819) 0:00:21.168 ******
2025-09-19 06:52:30.407850 | orchestrator | ===============================================================================
2025-09-19 06:52:30.407858 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.64s
2025-09-19 06:52:30.407865 | orchestrator | Install python3-docker -------------------------------------------------- 2.82s
2025-09-19 06:52:30.407873 | orchestrator | Apply netplan configuration --------------------------------------------- 2.31s
2025-09-19 06:52:30.407881 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.91s
2025-09-19 06:52:30.407889 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s
2025-09-19 06:52:30.407897 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.73s
2025-09-19 06:52:30.407904 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s
2025-09-19 06:52:30.407912 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.56s
2025-09-19 06:52:30.407920 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.44s
2025-09-19 06:52:30.407928 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s
2025-09-19 06:52:30.407936 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s
2025-09-19 06:52:30.407948 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.68s
2025-09-19 06:52:31.097821 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-19 06:52:43.147431 | orchestrator | 2025-09-19 06:52:43 | INFO  | Task b5b20604-b51a-49b2-9132-435c983c8912 (reboot) was prepared for execution.
2025-09-19 06:52:43.147611 | orchestrator | 2025-09-19 06:52:43 | INFO  | It takes a moment until task b5b20604-b51a-49b2-9132-435c983c8912 (reboot) has been started and output is visible here.
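The `-e ireallymeanit=yes` extra variable passed to `osism apply reboot` is a confirmation guard: the play's first task exits unless the caller explicitly confirms. A minimal shell sketch of that pattern (the function name `confirm_reboot` and the messages are assumptions for illustration, not taken from the playbook):

```shell
# Hypothetical sketch of the "ireallymeanit" confirmation guard.
# The real playbook's first task aborts the run unless
# -e ireallymeanit=yes was supplied on the command line.
confirm_reboot() {
    if [ "${ireallymeanit:-no}" != "yes" ]; then
        echo "Exiting: pass -e ireallymeanit=yes to confirm the reboot." >&2
        return 1
    fi
    echo "Proceeding with reboot."
}
```

With the variable set to anything other than `yes`, the guard fails fast and nothing is rebooted.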
2025-09-19 06:52:52.840636 | orchestrator |
2025-09-19 06:52:52.840750 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:52.840767 | orchestrator |
2025-09-19 06:52:52.840779 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:52.840791 | orchestrator | Friday 19 September 2025 06:52:47 +0000 (0:00:00.160) 0:00:00.160 ******
2025-09-19 06:52:52.840802 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:52:52.840814 | orchestrator |
2025-09-19 06:52:52.840825 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:52.840836 | orchestrator | Friday 19 September 2025 06:52:47 +0000 (0:00:00.086) 0:00:00.247 ******
2025-09-19 06:52:52.840847 | orchestrator | changed: [testbed-node-0]
2025-09-19 06:52:52.840858 | orchestrator |
2025-09-19 06:52:52.840869 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:52.840880 | orchestrator | Friday 19 September 2025 06:52:48 +0000 (0:00:00.912) 0:00:01.160 ******
2025-09-19 06:52:52.840891 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:52:52.840902 | orchestrator |
2025-09-19 06:52:52.840928 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:52.840949 | orchestrator |
2025-09-19 06:52:52.840960 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:52.840972 | orchestrator | Friday 19 September 2025 06:52:48 +0000 (0:00:00.117) 0:00:01.278 ******
2025-09-19 06:52:52.840982 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:52:52.840993 | orchestrator |
2025-09-19 06:52:52.841004 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:52.841015 | orchestrator | Friday 19 September 2025 06:52:48 +0000 (0:00:00.092) 0:00:01.371 ******
2025-09-19 06:52:52.841026 | orchestrator | changed: [testbed-node-1]
2025-09-19 06:52:52.841037 | orchestrator |
2025-09-19 06:52:52.841048 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:52.841059 | orchestrator | Friday 19 September 2025 06:52:48 +0000 (0:00:00.632) 0:00:02.003 ******
2025-09-19 06:52:52.841070 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:52:52.841104 | orchestrator |
2025-09-19 06:52:52.841115 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:52.841126 | orchestrator |
2025-09-19 06:52:52.841137 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:52.841150 | orchestrator | Friday 19 September 2025 06:52:48 +0000 (0:00:00.117) 0:00:02.121 ******
2025-09-19 06:52:52.841162 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:52:52.841175 | orchestrator |
2025-09-19 06:52:52.841187 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:52.841200 | orchestrator | Friday 19 September 2025 06:52:49 +0000 (0:00:00.170) 0:00:02.292 ******
2025-09-19 06:52:52.841212 | orchestrator | changed: [testbed-node-2]
2025-09-19 06:52:52.841224 | orchestrator |
2025-09-19 06:52:52.841237 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:52.841251 | orchestrator | Friday 19 September 2025 06:52:49 +0000 (0:00:00.657) 0:00:02.949 ******
2025-09-19 06:52:52.841263 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:52:52.841275 | orchestrator |
2025-09-19 06:52:52.841288 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:52.841300 | orchestrator |
2025-09-19 06:52:52.841313 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:52.841325 | orchestrator | Friday 19 September 2025 06:52:49 +0000 (0:00:00.116) 0:00:03.066 ******
2025-09-19 06:52:52.841338 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:52:52.841350 | orchestrator |
2025-09-19 06:52:52.841362 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:52.841375 | orchestrator | Friday 19 September 2025 06:52:50 +0000 (0:00:00.108) 0:00:03.174 ******
2025-09-19 06:52:52.841387 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:52:52.841399 | orchestrator |
2025-09-19 06:52:52.841411 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:52.841424 | orchestrator | Friday 19 September 2025 06:52:50 +0000 (0:00:00.661) 0:00:03.835 ******
2025-09-19 06:52:52.841437 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:52:52.841449 | orchestrator |
2025-09-19 06:52:52.841462 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:52.841474 | orchestrator |
2025-09-19 06:52:52.841486 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:52.841499 | orchestrator | Friday 19 September 2025 06:52:50 +0000 (0:00:00.113) 0:00:03.948 ******
2025-09-19 06:52:52.841511 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:52:52.841545 | orchestrator |
2025-09-19 06:52:52.841556 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:52.841567 | orchestrator | Friday 19 September 2025 06:52:50 +0000 (0:00:00.093) 0:00:04.042 ******
2025-09-19 06:52:52.841578 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:52:52.841589 | orchestrator |
2025-09-19 06:52:52.841600 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:52.841611 | orchestrator | Friday 19 September 2025 06:52:51 +0000 (0:00:00.672) 0:00:04.714 ******
2025-09-19 06:52:52.841621 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:52:52.841632 | orchestrator |
2025-09-19 06:52:52.841643 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 06:52:52.841654 | orchestrator |
2025-09-19 06:52:52.841665 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 06:52:52.841675 | orchestrator | Friday 19 September 2025 06:52:51 +0000 (0:00:00.123) 0:00:04.838 ******
2025-09-19 06:52:52.841686 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:52:52.841697 | orchestrator |
2025-09-19 06:52:52.841708 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 06:52:52.841719 | orchestrator | Friday 19 September 2025 06:52:51 +0000 (0:00:00.118) 0:00:04.957 ******
2025-09-19 06:52:52.841729 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:52:52.841740 | orchestrator |
2025-09-19 06:52:52.841751 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 06:52:52.841770 | orchestrator | Friday 19 September 2025 06:52:52 +0000 (0:00:00.665) 0:00:05.623 ******
2025-09-19 06:52:52.841800 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:52:52.841811 | orchestrator |
2025-09-19 06:52:52.841822 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:52:52.841834 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:52.841846 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:52.841857 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:52.841867 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:52.841878 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:52.841889 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:52:52.841900 | orchestrator |
2025-09-19 06:52:52.841911 | orchestrator |
2025-09-19 06:52:52.841922 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:52:52.841933 | orchestrator | Friday 19 September 2025 06:52:52 +0000 (0:00:00.040) 0:00:05.663 ******
2025-09-19 06:52:52.841944 | orchestrator | ===============================================================================
2025-09-19 06:52:52.841954 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.20s
2025-09-19 06:52:52.841970 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.67s
2025-09-19 06:52:52.841981 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s
2025-09-19 06:52:53.122812 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-19 06:53:05.228858 | orchestrator | 2025-09-19 06:53:05 | INFO  | Task 32e97c03-da5e-4adb-a1bd-5cbf8aa12c02 (wait-for-connection) was prepared for execution.
2025-09-19 06:53:05.228971 | orchestrator | 2025-09-19 06:53:05 | INFO  | It takes a moment until task 32e97c03-da5e-4adb-a1bd-5cbf8aa12c02 (wait-for-connection) has been started and output is visible here.
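The `wait-for-connection` play invoked above blocks until each rebooted node answers again, retrying until a timeout. A hedged sketch of such a polling loop, with the actual reachability check stubbed out as `probe` (an assumption here; Ansible's `wait_for_connection` module performs the equivalent connection test internally):

```shell
# Sketch of a reachability wait loop; `probe` stands in for whatever
# connection test is used (e.g. an SSH ping) -- an assumption, not the
# real module internals.
wait_for_connection() {
    local host=$1 timeout=${2:-300} interval=5 waited=0
    until probe "$host"; do
        (( waited += interval ))
        if (( waited >= timeout )); then
            echo "$host still unreachable after ${timeout}s" >&2
            return 1
        fi
        sleep "$interval"
    done
    echo "$host is reachable"
}
```

In the run above all six nodes came back within the first probe window, so the task finished in roughly 11.5 seconds.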
2025-09-19 06:53:21.087269 | orchestrator | 2025-09-19 06:53:21.087388 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-19 06:53:21.087405 | orchestrator | 2025-09-19 06:53:21.087417 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-19 06:53:21.087429 | orchestrator | Friday 19 September 2025 06:53:09 +0000 (0:00:00.235) 0:00:00.235 ****** 2025-09-19 06:53:21.087440 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:53:21.087453 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:53:21.087464 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:53:21.087474 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:53:21.087485 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:53:21.087496 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:53:21.087507 | orchestrator | 2025-09-19 06:53:21.087518 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:53:21.087609 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:53:21.087623 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:53:21.087634 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:53:21.087675 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:53:21.087706 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:53:21.087718 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:53:21.087729 | orchestrator | 2025-09-19 06:53:21.087740 | orchestrator | 2025-09-19 06:53:21.087751 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-19 06:53:21.087762 | orchestrator | Friday 19 September 2025 06:53:20 +0000 (0:00:11.515) 0:00:11.750 ****** 2025-09-19 06:53:21.087773 | orchestrator | =============================================================================== 2025-09-19 06:53:21.087784 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2025-09-19 06:53:21.372010 | orchestrator | + osism apply hddtemp 2025-09-19 06:53:33.583083 | orchestrator | 2025-09-19 06:53:33 | INFO  | Task ec33e7bc-a594-413e-be77-3b512ee20164 (hddtemp) was prepared for execution. 2025-09-19 06:53:33.583194 | orchestrator | 2025-09-19 06:53:33 | INFO  | It takes a moment until task ec33e7bc-a594-413e-be77-3b512ee20164 (hddtemp) has been started and output is visible here. 2025-09-19 06:54:00.349199 | orchestrator | 2025-09-19 06:54:00.349321 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-19 06:54:00.349338 | orchestrator | 2025-09-19 06:54:00.349360 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-19 06:54:00.349373 | orchestrator | Friday 19 September 2025 06:53:37 +0000 (0:00:00.259) 0:00:00.259 ****** 2025-09-19 06:54:00.349384 | orchestrator | ok: [testbed-manager] 2025-09-19 06:54:00.349396 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:54:00.349407 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:54:00.349418 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:54:00.349429 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:54:00.349440 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:54:00.349451 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:54:00.349462 | orchestrator | 2025-09-19 06:54:00.349474 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-19 06:54:00.349485 | orchestrator | Friday 19 September 2025 
06:53:38 +0000 (0:00:00.685) 0:00:00.944 ****** 2025-09-19 06:54:00.349497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:54:00.349511 | orchestrator | 2025-09-19 06:54:00.349551 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-19 06:54:00.349564 | orchestrator | Friday 19 September 2025 06:53:39 +0000 (0:00:01.212) 0:00:02.157 ****** 2025-09-19 06:54:00.349576 | orchestrator | ok: [testbed-manager] 2025-09-19 06:54:00.349587 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:54:00.349598 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:54:00.349609 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:54:00.349620 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:54:00.349630 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:54:00.349641 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:54:00.349652 | orchestrator | 2025-09-19 06:54:00.349663 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-19 06:54:00.349674 | orchestrator | Friday 19 September 2025 06:53:41 +0000 (0:00:02.024) 0:00:04.181 ****** 2025-09-19 06:54:00.349685 | orchestrator | changed: [testbed-manager] 2025-09-19 06:54:00.349697 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:54:00.349708 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:54:00.349719 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:54:00.349730 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:54:00.349762 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:54:00.349775 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:54:00.349787 | orchestrator | 2025-09-19 06:54:00.349800 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-19 06:54:00.349813 | orchestrator | Friday 19 September 2025 06:53:42 +0000 (0:00:01.156) 0:00:05.337 ****** 2025-09-19 06:54:00.349825 | orchestrator | ok: [testbed-node-0] 2025-09-19 06:54:00.349838 | orchestrator | ok: [testbed-node-1] 2025-09-19 06:54:00.349850 | orchestrator | ok: [testbed-node-2] 2025-09-19 06:54:00.349863 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:54:00.349875 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:54:00.349888 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:54:00.349900 | orchestrator | ok: [testbed-manager] 2025-09-19 06:54:00.349913 | orchestrator | 2025-09-19 06:54:00.349926 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-19 06:54:00.349938 | orchestrator | Friday 19 September 2025 06:53:43 +0000 (0:00:01.162) 0:00:06.500 ****** 2025-09-19 06:54:00.349949 | orchestrator | skipping: [testbed-node-0] 2025-09-19 06:54:00.349960 | orchestrator | skipping: [testbed-node-1] 2025-09-19 06:54:00.349971 | orchestrator | skipping: [testbed-node-2] 2025-09-19 06:54:00.349982 | orchestrator | changed: [testbed-manager] 2025-09-19 06:54:00.349993 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:54:00.350004 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:54:00.350015 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:54:00.350151 | orchestrator | 2025-09-19 06:54:00.350170 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-19 06:54:00.350188 | orchestrator | Friday 19 September 2025 06:53:44 +0000 (0:00:00.821) 0:00:07.321 ****** 2025-09-19 06:54:00.350205 | orchestrator | changed: [testbed-manager] 2025-09-19 06:54:00.350224 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:54:00.350244 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:54:00.350263 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:54:00.350282 | orchestrator | changed: 
[testbed-node-2] 2025-09-19 06:54:00.350294 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:54:00.350305 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:54:00.350315 | orchestrator | 2025-09-19 06:54:00.350326 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-19 06:54:00.350337 | orchestrator | Friday 19 September 2025 06:53:56 +0000 (0:00:12.115) 0:00:19.436 ****** 2025-09-19 06:54:00.350349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 06:54:00.350360 | orchestrator | 2025-09-19 06:54:00.350371 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-19 06:54:00.350382 | orchestrator | Friday 19 September 2025 06:53:58 +0000 (0:00:01.417) 0:00:20.854 ****** 2025-09-19 06:54:00.350393 | orchestrator | changed: [testbed-manager] 2025-09-19 06:54:00.350404 | orchestrator | changed: [testbed-node-2] 2025-09-19 06:54:00.350415 | orchestrator | changed: [testbed-node-1] 2025-09-19 06:54:00.350425 | orchestrator | changed: [testbed-node-3] 2025-09-19 06:54:00.350436 | orchestrator | changed: [testbed-node-0] 2025-09-19 06:54:00.350447 | orchestrator | changed: [testbed-node-4] 2025-09-19 06:54:00.350457 | orchestrator | changed: [testbed-node-5] 2025-09-19 06:54:00.350468 | orchestrator | 2025-09-19 06:54:00.350479 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:54:00.350490 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 06:54:00.350546 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:54:00.350569 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:54:00.350592 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:54:00.350603 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:54:00.350614 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:54:00.350625 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 06:54:00.350636 | orchestrator | 2025-09-19 06:54:00.350647 | orchestrator | 2025-09-19 06:54:00.350658 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:54:00.350669 | orchestrator | Friday 19 September 2025 06:54:00 +0000 (0:00:01.853) 0:00:22.707 ****** 2025-09-19 06:54:00.350680 | orchestrator | =============================================================================== 2025-09-19 06:54:00.350690 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.12s 2025-09-19 06:54:00.350701 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.02s 2025-09-19 06:54:00.350712 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.85s 2025-09-19 06:54:00.350723 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.42s 2025-09-19 06:54:00.350733 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s 2025-09-19 06:54:00.350744 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.16s 2025-09-19 06:54:00.350755 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.16s 2025-09-19 06:54:00.350766 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.82s 2025-09-19 06:54:00.350777 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.69s 2025-09-19 06:54:00.631855 | orchestrator | ++ semver latest 7.1.1 2025-09-19 06:54:00.685985 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-19 06:54:00.686118 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-19 06:54:00.686133 | orchestrator | + sudo systemctl restart manager.service 2025-09-19 06:54:27.292841 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-19 06:54:27.292983 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-19 06:54:27.293001 | orchestrator | + local max_attempts=60 2025-09-19 06:54:27.293014 | orchestrator | + local name=ceph-ansible 2025-09-19 06:54:27.293025 | orchestrator | + local attempt_num=1 2025-09-19 06:54:27.293037 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:27.338923 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:27.339016 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:27.339040 | orchestrator | + sleep 5 2025-09-19 06:54:32.342375 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:32.368837 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:32.368910 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:32.368923 | orchestrator | + sleep 5 2025-09-19 06:54:37.372048 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:37.401735 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 06:54:37.401821 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 06:54:37.401834 | orchestrator | + sleep 5 2025-09-19 06:54:42.404741 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 06:54:42.443040 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:54:42.443134 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:54:42.443149 | orchestrator | + sleep 5
2025-09-19 06:54:47.447035 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:54:47.494164 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:54:47.494239 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:54:47.494273 | orchestrator | + sleep 5
2025-09-19 06:54:52.498395 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:54:52.531809 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:54:52.531893 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:54:52.531908 | orchestrator | + sleep 5
2025-09-19 06:54:57.535859 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:54:57.579683 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:54:57.579775 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:54:57.579789 | orchestrator | + sleep 5
2025-09-19 06:55:02.583980 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:55:02.637302 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 06:55:02.637432 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:55:02.637457 | orchestrator | + sleep 5
2025-09-19 06:55:07.640313 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:55:07.675708 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 06:55:07.675750 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:55:07.675759 | orchestrator | + sleep 5
2025-09-19 06:55:12.678799 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:55:12.713410 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 06:55:12.713437 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:55:12.713442 | orchestrator | + sleep 5
2025-09-19 06:55:17.719088 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:55:17.763426 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 06:55:17.763555 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:55:17.763571 | orchestrator | + sleep 5
2025-09-19 06:55:22.768902 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:55:22.800709 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 06:55:22.800793 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:55:22.800807 | orchestrator | + sleep 5
2025-09-19 06:55:27.804789 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:55:27.843587 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 06:55:27.843674 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 06:55:27.843688 | orchestrator | + sleep 5
2025-09-19 06:55:32.848062 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 06:55:32.884567 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:55:32.884641 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-19 06:55:32.884656 | orchestrator | + local max_attempts=60
2025-09-19 06:55:32.884670 | orchestrator | + local name=kolla-ansible
2025-09-19 06:55:32.884682 | orchestrator | + local attempt_num=1
2025-09-19 06:55:32.885349 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-19 06:55:32.916823 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:55:32.916876 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-19 06:55:32.916892 | orchestrator | + local max_attempts=60
2025-09-19 06:55:32.916905 | orchestrator | + local name=osism-ansible
2025-09-19 06:55:32.916917 | orchestrator | + local attempt_num=1
2025-09-19 06:55:32.917716 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-19 06:55:32.946700 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 06:55:32.946727 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-19 06:55:32.946740 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-19 06:55:33.112346 | orchestrator | ARA in ceph-ansible already disabled.
2025-09-19 06:55:33.285664 | orchestrator | ARA in kolla-ansible already disabled.
2025-09-19 06:55:33.490299 | orchestrator | ARA in osism-ansible already disabled.
2025-09-19 06:55:33.675119 | orchestrator | ARA in osism-kubernetes already disabled.
2025-09-19 06:55:33.675475 | orchestrator | + osism apply gather-facts
2025-09-19 06:55:52.601189 | orchestrator | 2025-09-19 06:55:52 | INFO  | Task 16bbaece-7839-4c30-80b9-8ec3b7e79f3f (gather-facts) was prepared for execution.
2025-09-19 06:55:52.601301 | orchestrator | 2025-09-19 06:55:52 | INFO  | It takes a moment until task 16bbaece-7839-4c30-80b9-8ec3b7e79f3f (gather-facts) has been started and output is visible here.
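The xtrace lines above come from a polling helper that repeatedly asks `docker inspect` for the container's health state. Reconstructed from the trace, it looks roughly like this; the loop conditions and the `wait_for_container_healthy` name are taken from the trace, while details such as the error message are assumptions (the trace also invokes docker via its absolute path `/usr/bin/docker`):

```shell
# Sketch of the wait_for_container_healthy helper, reconstructed from the
# xtrace above; the real script in the testbed configuration may differ.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Poll the Docker health check result until the container is "healthy"
    # (other values seen above: "unhealthy" while the check still fails,
    # "starting" during the start period).
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2  # assumed wording
            return 1
        fi
        sleep 5
    done
}
```

With 60 attempts and a 5-second sleep this waits up to about five minutes per container; above, ceph-ansible goes unhealthy → starting → healthy within roughly 50 seconds, and kolla-ansible and osism-ansible are already healthy on the first probe.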
2025-09-19 06:56:05.563951 | orchestrator |
2025-09-19 06:56:05.564090 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 06:56:05.564139 | orchestrator |
2025-09-19 06:56:05.564152 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 06:56:05.564164 | orchestrator | Friday 19 September 2025 06:55:56 +0000 (0:00:00.222) 0:00:00.222 ******
2025-09-19 06:56:05.564175 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:56:05.564187 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:56:05.564198 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:56:05.564208 | orchestrator | ok: [testbed-manager]
2025-09-19 06:56:05.564219 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:56:05.564230 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:56:05.564240 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:56:05.564251 | orchestrator |
2025-09-19 06:56:05.564262 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 06:56:05.564273 | orchestrator |
2025-09-19 06:56:05.564284 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 06:56:05.564295 | orchestrator | Friday 19 September 2025 06:56:04 +0000 (0:00:08.208) 0:00:08.430 ******
2025-09-19 06:56:05.564306 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:56:05.564318 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:56:05.564328 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:56:05.564339 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:56:05.564350 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:56:05.564361 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:56:05.564371 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:56:05.564382 | orchestrator |
2025-09-19 06:56:05.564393 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:56:05.564404 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:56:05.564417 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:56:05.564427 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:56:05.564438 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:56:05.564449 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:56:05.564460 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:56:05.564471 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 06:56:05.564482 | orchestrator |
2025-09-19 06:56:05.564493 | orchestrator |
2025-09-19 06:56:05.564503 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:56:05.564555 | orchestrator | Friday 19 September 2025 06:56:05 +0000 (0:00:00.506) 0:00:08.937 ******
2025-09-19 06:56:05.564566 | orchestrator | ===============================================================================
2025-09-19 06:56:05.564577 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.21s
2025-09-19 06:56:05.564588 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-09-19 06:56:06.057361 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-09-19 06:56:06.071823 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-09-19 06:56:06.085431 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-09-19 06:56:06.106201 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-09-19 06:56:06.128285 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-09-19 06:56:06.150041 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-09-19 06:56:06.168533 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-09-19 06:56:06.189307 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-09-19 06:56:06.209677 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-09-19 06:56:06.231418 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-09-19 06:56:06.249203 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-09-19 06:56:06.263725 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-09-19 06:56:06.276774 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-09-19 06:56:06.295319 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-09-19 06:56:06.312318 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-09-19 06:56:06.329426 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-09-19 06:56:06.350446 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-09-19 06:56:06.371045 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-09-19 06:56:06.392807 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-09-19 06:56:06.415698 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-09-19 06:56:06.433717 | orchestrator | + [[ false == \t\r\u\e ]]
2025-09-19 06:56:06.753280 | orchestrator | ok: Runtime: 0:23:22.001670
2025-09-19 06:56:06.850526 |
2025-09-19 06:56:06.850660 | TASK [Deploy services]
2025-09-19 06:56:07.383460 | orchestrator | skipping: Conditional result was False
2025-09-19 06:56:07.401085 |
2025-09-19 06:56:07.401322 | TASK [Deploy in a nutshell]
2025-09-19 06:56:08.090940 | orchestrator |
2025-09-19 06:56:08.091109 | orchestrator | + set -e
2025-09-19 06:56:08.091136 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 06:56:08.091151 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 06:56:08.091172 | orchestrator | ++ INTERACTIVE=false
2025-09-19 06:56:08.091186 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 06:56:08.091200 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 06:56:08.091245 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 06:56:08.091267 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 06:56:08.091287 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 06:56:08.091300 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 06:56:08.091315 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 06:56:08.091327 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 06:56:08.091346 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 06:56:08.091357 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 06:56:08.091378 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 06:56:08.091389 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 06:56:08.091404 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 06:56:08.091415 | orchestrator | ++ export ARA=false
2025-09-19 06:56:08.091427 | orchestrator | ++ ARA=false
2025-09-19 06:56:08.091438 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 06:56:08.091450 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 06:56:08.091461 | orchestrator | ++ export TEMPEST=false
2025-09-19 06:56:08.091472 | orchestrator | ++ TEMPEST=false
2025-09-19 06:56:08.091482 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 06:56:08.091493 | orchestrator | ++ IS_ZUUL=true
2025-09-19 06:56:08.091504 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.132
2025-09-19 06:56:08.091546 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.132
2025-09-19 06:56:08.091557 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 06:56:08.091568 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 06:56:08.091579 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 06:56:08.091590 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 06:56:08.091601 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 06:56:08.091612 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 06:56:08.091623 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 06:56:08.091641 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 06:56:08.091653 | orchestrator | + echo
2025-09-19 06:56:08.091664 | orchestrator | + echo '# PULL IMAGES'
2025-09-19 06:56:08.091686 | orchestrator | # PULL IMAGES
2025-09-19 06:56:08.091698 | orchestrator | + echo
2025-09-19 06:56:08.091709 | orchestrator |
2025-09-19 06:56:08.092202 | orchestrator | ++ semver latest 7.0.0
2025-09-19 06:56:08.155899 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-19 06:56:08.155993 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 06:56:08.156010 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-09-19 06:56:10.074901 | orchestrator | 2025-09-19 06:56:10 | INFO  | Trying to run play pull-images in environment custom
2025-09-19 06:56:20.227891 | orchestrator | 2025-09-19 06:56:20 | INFO  | Task 396bc745-216f-4e7f-9a9f-fa93212c3fd4 (pull-images) was prepared for execution.
2025-09-19 06:56:20.228040 | orchestrator | 2025-09-19 06:56:20 | INFO  | Task 396bc745-216f-4e7f-9a9f-fa93212c3fd4 is running in background. No more output. Check ARA for logs.
2025-09-19 06:56:22.548889 | orchestrator | 2025-09-19 06:56:22 | INFO  | Trying to run play wipe-partitions in environment custom
2025-09-19 06:56:32.679695 | orchestrator | 2025-09-19 06:56:32 | INFO  | Task b7554c32-71a2-4b93-a24d-6c9f0de3d486 (wipe-partitions) was prepared for execution.
2025-09-19 06:56:32.679865 | orchestrator | 2025-09-19 06:56:32 | INFO  | It takes a moment until task b7554c32-71a2-4b93-a24d-6c9f0de3d486 (wipe-partitions) has been started and output is visible here.
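Per device, the wipe-partitions play announced here reduces to roughly the following commands. This is a sketch inferred from the task names in its output ("Wipe partitions with wipefs", "Overwrite first 32M with zeros", "Reload udev rules", "Request device events from the kernel"); the exact flags and the `wipe_device` helper name are assumptions, not the play's actual implementation:

```shell
# Hypothetical per-device equivalent of the wipe-partitions play; inferred
# from the Ansible task names, so flags and structure are assumptions.
wipe_device() {
    local dev="$1"                            # e.g. /dev/sdb
    wipefs --all "$dev"                       # drop filesystem/RAID/LVM signatures
    dd if=/dev/zero of="$dev" bs=1M count=32  # zero the first 32 MiB
    udevadm control --reload-rules            # reload udev rules
    udevadm trigger                           # request device events from the kernel
}
```

Zeroing the first 32 MiB in addition to `wipefs` also clears partition tables, LVM metadata and old Ceph OSD headers that a signature scan alone might miss, so the /dev/sdb, /dev/sdc and /dev/sdd devices on the storage nodes come up clean for the Ceph deployment.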
2025-09-19 06:56:44.633603 | orchestrator |
2025-09-19 06:56:44.633719 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-09-19 06:56:44.633745 | orchestrator |
2025-09-19 06:56:44.633763 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-09-19 06:56:44.633787 | orchestrator | Friday 19 September 2025 06:56:36 +0000 (0:00:00.134) 0:00:00.134 ******
2025-09-19 06:56:44.633808 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:56:44.633826 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:56:44.633842 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:56:44.633859 | orchestrator |
2025-09-19 06:56:44.633877 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-09-19 06:56:44.633924 | orchestrator | Friday 19 September 2025 06:56:37 +0000 (0:00:00.558) 0:00:00.692 ******
2025-09-19 06:56:44.633943 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:56:44.633960 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:56:44.633981 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:56:44.633999 | orchestrator |
2025-09-19 06:56:44.634082 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-09-19 06:56:44.634103 | orchestrator | Friday 19 September 2025 06:56:37 +0000 (0:00:00.227) 0:00:00.920 ******
2025-09-19 06:56:44.634122 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:56:44.634140 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:56:44.634198 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:56:44.634215 | orchestrator |
2025-09-19 06:56:44.634233 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-09-19 06:56:44.634250 | orchestrator | Friday 19 September 2025 06:56:38 +0000 (0:00:00.734) 0:00:01.654 ******
2025-09-19 06:56:44.634267 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:56:44.634284 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:56:44.634300 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:56:44.634316 | orchestrator |
2025-09-19 06:56:44.634333 | orchestrator | TASK [Check device availability] ***********************************************
2025-09-19 06:56:44.634351 | orchestrator | Friday 19 September 2025 06:56:38 +0000 (0:00:00.248) 0:00:01.903 ******
2025-09-19 06:56:44.634367 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-19 06:56:44.634390 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-19 06:56:44.634408 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-19 06:56:44.634426 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-19 06:56:44.634443 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-19 06:56:44.634460 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-19 06:56:44.634476 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-19 06:56:44.634493 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-19 06:56:44.634553 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-19 06:56:44.634569 | orchestrator |
2025-09-19 06:56:44.634584 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-09-19 06:56:44.634600 | orchestrator | Friday 19 September 2025 06:56:39 +0000 (0:00:01.169) 0:00:03.073 ******
2025-09-19 06:56:44.634615 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-09-19 06:56:44.634631 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-09-19 06:56:44.634648 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-09-19 06:56:44.634663 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-09-19 06:56:44.634678 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-09-19 06:56:44.634694 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-09-19 06:56:44.634710 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-09-19 06:56:44.634727 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-09-19 06:56:44.634743 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-09-19 06:56:44.634759 | orchestrator |
2025-09-19 06:56:44.634775 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-09-19 06:56:44.634791 | orchestrator | Friday 19 September 2025 06:56:40 +0000 (0:00:01.299) 0:00:04.372 ******
2025-09-19 06:56:44.634806 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-19 06:56:44.634822 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-19 06:56:44.634837 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-19 06:56:44.634852 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-19 06:56:44.634869 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-19 06:56:44.634898 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-19 06:56:44.634916 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-19 06:56:44.634949 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-19 06:56:44.634965 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-19 06:56:44.634981 | orchestrator |
2025-09-19 06:56:44.634996 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-09-19 06:56:44.635013 | orchestrator | Friday 19 September 2025 06:56:43 +0000 (0:00:02.164) 0:00:06.537 ******
2025-09-19 06:56:44.635029 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:56:44.635046 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:56:44.635062 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:56:44.635078 | orchestrator |
2025-09-19 06:56:44.635088 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-09-19 06:56:44.635098 | orchestrator | Friday 19 September 2025 06:56:43 +0000 (0:00:00.630) 0:00:07.167 ******
2025-09-19 06:56:44.635108 | orchestrator | changed: [testbed-node-3]
2025-09-19 06:56:44.635117 | orchestrator | changed: [testbed-node-4]
2025-09-19 06:56:44.635126 | orchestrator | changed: [testbed-node-5]
2025-09-19 06:56:44.635136 | orchestrator |
2025-09-19 06:56:44.635145 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:56:44.635159 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:44.635170 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:44.635204 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:56:44.635214 | orchestrator |
2025-09-19 06:56:44.635224 | orchestrator |
2025-09-19 06:56:44.635233 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:56:44.635243 | orchestrator | Friday 19 September 2025 06:56:44 +0000 (0:00:00.628) 0:00:07.796 ******
2025-09-19 06:56:44.635252 | orchestrator | ===============================================================================
2025-09-19 06:56:44.635262 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.16s
2025-09-19 06:56:44.635271 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.30s
2025-09-19 06:56:44.635281 | orchestrator | Check device availability ----------------------------------------------- 1.17s
2025-09-19 06:56:44.635290 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.73s
2025-09-19 06:56:44.635299 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2025-09-19 06:56:44.635309 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2025-09-19 06:56:44.635318 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.56s
2025-09-19 06:56:44.635328 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
2025-09-19 06:56:44.635343 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s
2025-09-19 06:56:56.853468 | orchestrator | 2025-09-19 06:56:56 | INFO  | Task cff93b85-8298-41ba-86fc-5bf97a8e4a9c (facts) was prepared for execution.
2025-09-19 06:56:56.853610 | orchestrator | 2025-09-19 06:56:56 | INFO  | It takes a moment until task cff93b85-8298-41ba-86fc-5bf97a8e4a9c (facts) has been started and output is visible here.
2025-09-19 06:57:08.933118 | orchestrator |
2025-09-19 06:57:08.933253 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-19 06:57:08.933277 | orchestrator |
2025-09-19 06:57:08.933294 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 06:57:08.933310 | orchestrator | Friday 19 September 2025 06:57:00 +0000 (0:00:00.285) 0:00:00.285 ******
2025-09-19 06:57:08.933326 | orchestrator | ok: [testbed-manager]
2025-09-19 06:57:08.933344 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:57:08.933361 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:57:08.933407 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:57:08.933424 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:57:08.933440 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:57:08.933456 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:57:08.933472 | orchestrator |
2025-09-19 06:57:08.933554 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 06:57:08.933573 | orchestrator | Friday 19 September 2025 06:57:02 +0000 (0:00:01.080) 0:00:01.366 ******
2025-09-19 06:57:08.933589 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:57:08.933606 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:57:08.933622 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:57:08.933637 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:57:08.933653 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:08.933669 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:08.933685 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:08.933701 | orchestrator |
2025-09-19 06:57:08.933717 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 06:57:08.933731 | orchestrator |
2025-09-19 06:57:08.933746 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 06:57:08.933762 | orchestrator | Friday 19 September 2025 06:57:03 +0000 (0:00:01.226) 0:00:02.592 ******
2025-09-19 06:57:08.933779 | orchestrator | ok: [testbed-node-1]
2025-09-19 06:57:08.933796 | orchestrator | ok: [testbed-node-0]
2025-09-19 06:57:08.933815 | orchestrator | ok: [testbed-manager]
2025-09-19 06:57:08.933833 | orchestrator | ok: [testbed-node-2]
2025-09-19 06:57:08.933849 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:57:08.933861 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:57:08.933872 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:57:08.933883 | orchestrator |
2025-09-19 06:57:08.933894 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 06:57:08.933905 | orchestrator |
2025-09-19 06:57:08.933916 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 06:57:08.933948 | orchestrator | Friday 19 September 2025 06:57:07 +0000 (0:00:04.674) 0:00:07.267 ******
2025-09-19 06:57:08.933960 | orchestrator | skipping: [testbed-manager]
2025-09-19 06:57:08.933971 | orchestrator | skipping: [testbed-node-0]
2025-09-19 06:57:08.933982 | orchestrator | skipping: [testbed-node-1]
2025-09-19 06:57:08.933993 | orchestrator | skipping: [testbed-node-2]
2025-09-19 06:57:08.934004 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:08.934014 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:08.934101 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:08.934155 | orchestrator |
2025-09-19 06:57:08.934165 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 06:57:08.934176 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:57:08.934187 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:57:08.934197 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:57:08.934207 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:57:08.934216 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:57:08.934226 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:57:08.934236 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 06:57:08.934246 | orchestrator |
2025-09-19 06:57:08.934269 | orchestrator |
2025-09-19 06:57:08.934279 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 06:57:08.934288 | orchestrator | Friday 19 September 2025 06:57:08 +0000 (0:00:00.659) 0:00:07.926 ******
2025-09-19 06:57:08.934298 | orchestrator | ===============================================================================
2025-09-19 06:57:08.934307 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.67s
2025-09-19 06:57:08.934317 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s
2025-09-19 06:57:08.934326 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s
2025-09-19 06:57:08.934336 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.66s
2025-09-19 06:57:11.290988 | orchestrator | 2025-09-19 06:57:11 | INFO  | Task b4e643ba-3c15-4633-8848-6e776a683ea3 (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-19 06:57:11.291081 | orchestrator | 2025-09-19 06:57:11 | INFO  | It takes a moment until task b4e643ba-3c15-4633-8848-6e776a683ea3 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-19 06:57:22.903306 | orchestrator |
2025-09-19 06:57:22.903415 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 06:57:22.903431 | orchestrator |
2025-09-19 06:57:22.903443 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 06:57:22.903458 | orchestrator | Friday 19 September 2025 06:57:15 +0000 (0:00:00.336) 0:00:00.336 ******
2025-09-19 06:57:22.903470 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 06:57:22.903509 | orchestrator |
2025-09-19 06:57:22.903521 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 06:57:22.903533 | orchestrator | Friday 19 September 2025 06:57:15 +0000 (0:00:00.255) 0:00:00.591 ******
2025-09-19 06:57:22.903544 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:57:22.903556 | orchestrator |
2025-09-19 06:57:22.903567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.903578 | orchestrator | Friday 19 September 2025 06:57:15 +0000 (0:00:00.227) 0:00:00.818 ******
2025-09-19 06:57:22.903589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-19 06:57:22.903601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-19 06:57:22.903612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-19 06:57:22.903623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-19 06:57:22.903634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-19 06:57:22.903645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-19 06:57:22.903656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-19 06:57:22.903667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-19 06:57:22.903678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-19 06:57:22.903696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-19 06:57:22.903715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-19 06:57:22.903752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-19 06:57:22.903777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-19 06:57:22.903794 | orchestrator |
2025-09-19 06:57:22.903812 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.903830 | orchestrator | Friday 19 September 2025 06:57:16 +0000 (0:00:00.365) 0:00:01.184 ******
2025-09-19 06:57:22.903846 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:22.903886 | orchestrator |
2025-09-19 06:57:22.903905 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.903924 | orchestrator | Friday 19 September 2025 06:57:16 +0000 (0:00:00.474) 0:00:01.659 ******
2025-09-19 06:57:22.903946 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:22.903965 | orchestrator |
2025-09-19 06:57:22.903983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.904002 | orchestrator | Friday 19 September 2025 06:57:16 +0000 (0:00:00.202) 0:00:01.861 ******
2025-09-19 06:57:22.904021 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:22.904040 | orchestrator |
2025-09-19 06:57:22.904061 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.904081 | orchestrator | Friday 19 September 2025 06:57:17 +0000 (0:00:00.187) 0:00:02.049 ******
2025-09-19 06:57:22.904101 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:22.904125 | orchestrator |
2025-09-19 06:57:22.904137 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.904148 | orchestrator | Friday 19 September 2025 06:57:17 +0000 (0:00:00.183) 0:00:02.233 ******
2025-09-19 06:57:22.904159 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:22.904170 | orchestrator |
2025-09-19 06:57:22.904181 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.904192 | orchestrator | Friday 19 September 2025 06:57:17 +0000 (0:00:00.207) 0:00:02.441 ******
2025-09-19 06:57:22.904203 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:22.904213 | orchestrator |
2025-09-19 06:57:22.904224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.904235 | orchestrator | Friday 19 September 2025 06:57:17 +0000 (0:00:00.188) 0:00:02.630 ******
2025-09-19 06:57:22.904245 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:22.904256 | orchestrator |
2025-09-19 06:57:22.904267 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.904278 | orchestrator | Friday 19 September 2025 06:57:17 +0000 (0:00:00.198) 0:00:02.828 ******
2025-09-19 06:57:22.904289 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:57:22.904300 | orchestrator |
2025-09-19 06:57:22.904311 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.904321 | orchestrator | Friday 19 September 2025 06:57:18 +0000 (0:00:00.196) 0:00:03.024 ******
2025-09-19 06:57:22.904332 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37)
2025-09-19 06:57:22.904345 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37)
2025-09-19 06:57:22.904355 | orchestrator |
2025-09-19 06:57:22.904366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.904377 | orchestrator | Friday 19 September 2025 06:57:18 +0000 (0:00:00.431) 0:00:03.456 ******
2025-09-19 06:57:22.904408 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4dd49722-42e6-4e94-9106-a95d5116fdb0)
2025-09-19 06:57:22.904420 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4dd49722-42e6-4e94-9106-a95d5116fdb0)
2025-09-19 06:57:22.904430 | orchestrator |
2025-09-19 06:57:22.904441 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:22.904452 | orchestrator | Friday
19 September 2025 06:57:18 +0000 (0:00:00.409) 0:00:03.865 ****** 2025-09-19 06:57:22.904463 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1cf24504-b3f3-4e87-bda4-4a150d83b5cd) 2025-09-19 06:57:22.904474 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1cf24504-b3f3-4e87-bda4-4a150d83b5cd) 2025-09-19 06:57:22.904514 | orchestrator | 2025-09-19 06:57:22.904525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:22.904536 | orchestrator | Friday 19 September 2025 06:57:19 +0000 (0:00:00.605) 0:00:04.471 ****** 2025-09-19 06:57:22.904546 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b11ce89-f193-4587-acb9-80845fc85b80) 2025-09-19 06:57:22.904568 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b11ce89-f193-4587-acb9-80845fc85b80) 2025-09-19 06:57:22.904579 | orchestrator | 2025-09-19 06:57:22.904590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:22.904601 | orchestrator | Friday 19 September 2025 06:57:20 +0000 (0:00:00.608) 0:00:05.080 ****** 2025-09-19 06:57:22.904611 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:57:22.904622 | orchestrator | 2025-09-19 06:57:22.904633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:22.904650 | orchestrator | Friday 19 September 2025 06:57:20 +0000 (0:00:00.727) 0:00:05.808 ****** 2025-09-19 06:57:22.904662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-19 06:57:22.904673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-19 06:57:22.904684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-19 06:57:22.904694 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-19 06:57:22.904705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-19 06:57:22.904716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-19 06:57:22.904726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-19 06:57:22.904738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-19 06:57:22.904748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-19 06:57:22.904759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-19 06:57:22.904770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-19 06:57:22.904780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-19 06:57:22.904791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-19 06:57:22.904802 | orchestrator | 2025-09-19 06:57:22.904813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:22.904824 | orchestrator | Friday 19 September 2025 06:57:21 +0000 (0:00:00.373) 0:00:06.182 ****** 2025-09-19 06:57:22.904834 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:22.904845 | orchestrator | 2025-09-19 06:57:22.904856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:22.904867 | orchestrator | Friday 19 September 2025 06:57:21 +0000 (0:00:00.205) 0:00:06.387 ****** 2025-09-19 06:57:22.904878 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
06:57:22.904889 | orchestrator | 2025-09-19 06:57:22.904900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:22.904910 | orchestrator | Friday 19 September 2025 06:57:21 +0000 (0:00:00.192) 0:00:06.580 ****** 2025-09-19 06:57:22.904921 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:22.904932 | orchestrator | 2025-09-19 06:57:22.904943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:22.904954 | orchestrator | Friday 19 September 2025 06:57:21 +0000 (0:00:00.204) 0:00:06.784 ****** 2025-09-19 06:57:22.904964 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:22.904975 | orchestrator | 2025-09-19 06:57:22.904986 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:22.904997 | orchestrator | Friday 19 September 2025 06:57:22 +0000 (0:00:00.206) 0:00:06.991 ****** 2025-09-19 06:57:22.905008 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:22.905019 | orchestrator | 2025-09-19 06:57:22.905036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:22.905047 | orchestrator | Friday 19 September 2025 06:57:22 +0000 (0:00:00.199) 0:00:07.190 ****** 2025-09-19 06:57:22.905057 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:22.905068 | orchestrator | 2025-09-19 06:57:22.905079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:22.905089 | orchestrator | Friday 19 September 2025 06:57:22 +0000 (0:00:00.198) 0:00:07.389 ****** 2025-09-19 06:57:22.905100 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:22.905111 | orchestrator | 2025-09-19 06:57:22.905122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:22.905133 | orchestrator | Friday 19 
September 2025 06:57:22 +0000 (0:00:00.190) 0:00:07.580 ****** 2025-09-19 06:57:22.905151 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.379538 | orchestrator | 2025-09-19 06:57:30.379646 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:30.379663 | orchestrator | Friday 19 September 2025 06:57:22 +0000 (0:00:00.185) 0:00:07.765 ****** 2025-09-19 06:57:30.379675 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-19 06:57:30.379692 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-19 06:57:30.379712 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-19 06:57:30.379731 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-19 06:57:30.379750 | orchestrator | 2025-09-19 06:57:30.379768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:30.379785 | orchestrator | Friday 19 September 2025 06:57:23 +0000 (0:00:01.039) 0:00:08.805 ****** 2025-09-19 06:57:30.379796 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.379808 | orchestrator | 2025-09-19 06:57:30.379818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:30.379830 | orchestrator | Friday 19 September 2025 06:57:24 +0000 (0:00:00.224) 0:00:09.030 ****** 2025-09-19 06:57:30.379841 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.379851 | orchestrator | 2025-09-19 06:57:30.379862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:30.379873 | orchestrator | Friday 19 September 2025 06:57:24 +0000 (0:00:00.212) 0:00:09.243 ****** 2025-09-19 06:57:30.379884 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.379895 | orchestrator | 2025-09-19 06:57:30.379906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 
06:57:30.379917 | orchestrator | Friday 19 September 2025 06:57:24 +0000 (0:00:00.206) 0:00:09.449 ****** 2025-09-19 06:57:30.379931 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.379949 | orchestrator | 2025-09-19 06:57:30.379967 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-19 06:57:30.379986 | orchestrator | Friday 19 September 2025 06:57:24 +0000 (0:00:00.209) 0:00:09.659 ****** 2025-09-19 06:57:30.380004 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-19 06:57:30.380022 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-19 06:57:30.380035 | orchestrator | 2025-09-19 06:57:30.380048 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-19 06:57:30.380060 | orchestrator | Friday 19 September 2025 06:57:24 +0000 (0:00:00.179) 0:00:09.839 ****** 2025-09-19 06:57:30.380094 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.380106 | orchestrator | 2025-09-19 06:57:30.380119 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-19 06:57:30.380131 | orchestrator | Friday 19 September 2025 06:57:25 +0000 (0:00:00.132) 0:00:09.971 ****** 2025-09-19 06:57:30.380143 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.380155 | orchestrator | 2025-09-19 06:57:30.380168 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-19 06:57:30.380180 | orchestrator | Friday 19 September 2025 06:57:25 +0000 (0:00:00.123) 0:00:10.095 ****** 2025-09-19 06:57:30.380193 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.380230 | orchestrator | 2025-09-19 06:57:30.380243 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-19 06:57:30.380255 | orchestrator | Friday 19 September 2025 06:57:25 +0000 
(0:00:00.123) 0:00:10.218 ****** 2025-09-19 06:57:30.380267 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:57:30.380278 | orchestrator | 2025-09-19 06:57:30.380289 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-19 06:57:30.380300 | orchestrator | Friday 19 September 2025 06:57:25 +0000 (0:00:00.136) 0:00:10.355 ****** 2025-09-19 06:57:30.380311 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'deb73447-54c2-58c6-89f8-2e63b50c59b2'}}) 2025-09-19 06:57:30.380322 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}}) 2025-09-19 06:57:30.380333 | orchestrator | 2025-09-19 06:57:30.380344 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 06:57:30.380354 | orchestrator | Friday 19 September 2025 06:57:25 +0000 (0:00:00.164) 0:00:10.520 ****** 2025-09-19 06:57:30.380365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'deb73447-54c2-58c6-89f8-2e63b50c59b2'}})  2025-09-19 06:57:30.380384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}})  2025-09-19 06:57:30.380395 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.380406 | orchestrator | 2025-09-19 06:57:30.380417 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 06:57:30.380428 | orchestrator | Friday 19 September 2025 06:57:25 +0000 (0:00:00.142) 0:00:10.662 ****** 2025-09-19 06:57:30.380439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'deb73447-54c2-58c6-89f8-2e63b50c59b2'}})  2025-09-19 06:57:30.380449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}})  
2025-09-19 06:57:30.380460 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.380471 | orchestrator | 2025-09-19 06:57:30.380508 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-19 06:57:30.380519 | orchestrator | Friday 19 September 2025 06:57:26 +0000 (0:00:00.349) 0:00:11.012 ****** 2025-09-19 06:57:30.380529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'deb73447-54c2-58c6-89f8-2e63b50c59b2'}})  2025-09-19 06:57:30.380540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}})  2025-09-19 06:57:30.380551 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.380562 | orchestrator | 2025-09-19 06:57:30.380591 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 06:57:30.380603 | orchestrator | Friday 19 September 2025 06:57:26 +0000 (0:00:00.141) 0:00:11.153 ****** 2025-09-19 06:57:30.380614 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:57:30.380625 | orchestrator | 2025-09-19 06:57:30.380636 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 06:57:30.380652 | orchestrator | Friday 19 September 2025 06:57:26 +0000 (0:00:00.123) 0:00:11.276 ****** 2025-09-19 06:57:30.380664 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:57:30.380675 | orchestrator | 2025-09-19 06:57:30.380685 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 06:57:30.380696 | orchestrator | Friday 19 September 2025 06:57:26 +0000 (0:00:00.162) 0:00:11.439 ****** 2025-09-19 06:57:30.380707 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.380718 | orchestrator | 2025-09-19 06:57:30.380729 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 
06:57:30.380740 | orchestrator | Friday 19 September 2025 06:57:26 +0000 (0:00:00.141) 0:00:11.580 ****** 2025-09-19 06:57:30.380750 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.380761 | orchestrator | 2025-09-19 06:57:30.380780 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-19 06:57:30.380791 | orchestrator | Friday 19 September 2025 06:57:26 +0000 (0:00:00.134) 0:00:11.715 ****** 2025-09-19 06:57:30.380802 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.380813 | orchestrator | 2025-09-19 06:57:30.380824 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-19 06:57:30.380835 | orchestrator | Friday 19 September 2025 06:57:26 +0000 (0:00:00.138) 0:00:11.854 ****** 2025-09-19 06:57:30.380846 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 06:57:30.380857 | orchestrator |  "ceph_osd_devices": { 2025-09-19 06:57:30.380868 | orchestrator |  "sdb": { 2025-09-19 06:57:30.380879 | orchestrator |  "osd_lvm_uuid": "deb73447-54c2-58c6-89f8-2e63b50c59b2" 2025-09-19 06:57:30.380891 | orchestrator |  }, 2025-09-19 06:57:30.380902 | orchestrator |  "sdc": { 2025-09-19 06:57:30.380913 | orchestrator |  "osd_lvm_uuid": "6d43fc0f-0470-50ff-9d43-3faecb8a0ab1" 2025-09-19 06:57:30.380924 | orchestrator |  } 2025-09-19 06:57:30.380934 | orchestrator |  } 2025-09-19 06:57:30.380945 | orchestrator | } 2025-09-19 06:57:30.380956 | orchestrator | 2025-09-19 06:57:30.380967 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-19 06:57:30.380978 | orchestrator | Friday 19 September 2025 06:57:27 +0000 (0:00:00.135) 0:00:11.989 ****** 2025-09-19 06:57:30.380989 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.381000 | orchestrator | 2025-09-19 06:57:30.381010 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-19 
06:57:30.381021 | orchestrator | Friday 19 September 2025 06:57:27 +0000 (0:00:00.133) 0:00:12.122 ****** 2025-09-19 06:57:30.381032 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.381043 | orchestrator | 2025-09-19 06:57:30.381053 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-19 06:57:30.381064 | orchestrator | Friday 19 September 2025 06:57:27 +0000 (0:00:00.131) 0:00:12.254 ****** 2025-09-19 06:57:30.381075 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:57:30.381086 | orchestrator | 2025-09-19 06:57:30.381096 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-19 06:57:30.381107 | orchestrator | Friday 19 September 2025 06:57:27 +0000 (0:00:00.131) 0:00:12.385 ****** 2025-09-19 06:57:30.381118 | orchestrator | changed: [testbed-node-3] => { 2025-09-19 06:57:30.381129 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-19 06:57:30.381140 | orchestrator |  "ceph_osd_devices": { 2025-09-19 06:57:30.381151 | orchestrator |  "sdb": { 2025-09-19 06:57:30.381162 | orchestrator |  "osd_lvm_uuid": "deb73447-54c2-58c6-89f8-2e63b50c59b2" 2025-09-19 06:57:30.381173 | orchestrator |  }, 2025-09-19 06:57:30.381184 | orchestrator |  "sdc": { 2025-09-19 06:57:30.381195 | orchestrator |  "osd_lvm_uuid": "6d43fc0f-0470-50ff-9d43-3faecb8a0ab1" 2025-09-19 06:57:30.381206 | orchestrator |  } 2025-09-19 06:57:30.381217 | orchestrator |  }, 2025-09-19 06:57:30.381228 | orchestrator |  "lvm_volumes": [ 2025-09-19 06:57:30.381238 | orchestrator |  { 2025-09-19 06:57:30.381250 | orchestrator |  "data": "osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2", 2025-09-19 06:57:30.381261 | orchestrator |  "data_vg": "ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2" 2025-09-19 06:57:30.381271 | orchestrator |  }, 2025-09-19 06:57:30.381282 | orchestrator |  { 2025-09-19 06:57:30.381293 | orchestrator |  "data": 
"osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1", 2025-09-19 06:57:30.381304 | orchestrator |  "data_vg": "ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1" 2025-09-19 06:57:30.381314 | orchestrator |  } 2025-09-19 06:57:30.381325 | orchestrator |  ] 2025-09-19 06:57:30.381336 | orchestrator |  } 2025-09-19 06:57:30.381347 | orchestrator | } 2025-09-19 06:57:30.381358 | orchestrator | 2025-09-19 06:57:30.381369 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-19 06:57:30.381390 | orchestrator | Friday 19 September 2025 06:57:27 +0000 (0:00:00.207) 0:00:12.593 ****** 2025-09-19 06:57:30.381402 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 06:57:30.381412 | orchestrator | 2025-09-19 06:57:30.381423 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-19 06:57:30.381434 | orchestrator | 2025-09-19 06:57:30.381445 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 06:57:30.381456 | orchestrator | Friday 19 September 2025 06:57:29 +0000 (0:00:02.158) 0:00:14.751 ****** 2025-09-19 06:57:30.381467 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-19 06:57:30.381549 | orchestrator | 2025-09-19 06:57:30.381561 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 06:57:30.381572 | orchestrator | Friday 19 September 2025 06:57:30 +0000 (0:00:00.254) 0:00:15.005 ****** 2025-09-19 06:57:30.381583 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:57:30.381594 | orchestrator | 2025-09-19 06:57:30.381605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:30.381623 | orchestrator | Friday 19 September 2025 06:57:30 +0000 (0:00:00.237) 0:00:15.243 ****** 2025-09-19 06:57:38.406995 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-19 06:57:38.407133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-19 06:57:38.407159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-19 06:57:38.407175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-19 06:57:38.407187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-19 06:57:38.407198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-19 06:57:38.407208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-19 06:57:38.407219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-19 06:57:38.407230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-19 06:57:38.407241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-19 06:57:38.407251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-19 06:57:38.407262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-19 06:57:38.407272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-19 06:57:38.407292 | orchestrator | 2025-09-19 06:57:38.407312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407332 | orchestrator | Friday 19 September 2025 06:57:30 +0000 (0:00:00.397) 0:00:15.640 ****** 2025-09-19 06:57:38.407351 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.407369 | orchestrator | 2025-09-19 
06:57:38.407387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407405 | orchestrator | Friday 19 September 2025 06:57:30 +0000 (0:00:00.199) 0:00:15.839 ****** 2025-09-19 06:57:38.407424 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.407444 | orchestrator | 2025-09-19 06:57:38.407462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407511 | orchestrator | Friday 19 September 2025 06:57:31 +0000 (0:00:00.215) 0:00:16.054 ****** 2025-09-19 06:57:38.407525 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.407537 | orchestrator | 2025-09-19 06:57:38.407550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407563 | orchestrator | Friday 19 September 2025 06:57:31 +0000 (0:00:00.214) 0:00:16.269 ****** 2025-09-19 06:57:38.407575 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.407612 | orchestrator | 2025-09-19 06:57:38.407626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407638 | orchestrator | Friday 19 September 2025 06:57:31 +0000 (0:00:00.205) 0:00:16.474 ****** 2025-09-19 06:57:38.407650 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.407662 | orchestrator | 2025-09-19 06:57:38.407673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407684 | orchestrator | Friday 19 September 2025 06:57:32 +0000 (0:00:00.664) 0:00:17.139 ****** 2025-09-19 06:57:38.407694 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.407705 | orchestrator | 2025-09-19 06:57:38.407716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407726 | orchestrator | Friday 19 September 2025 06:57:32 +0000 (0:00:00.247) 
0:00:17.386 ****** 2025-09-19 06:57:38.407737 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.407748 | orchestrator | 2025-09-19 06:57:38.407777 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407788 | orchestrator | Friday 19 September 2025 06:57:32 +0000 (0:00:00.232) 0:00:17.619 ****** 2025-09-19 06:57:38.407799 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.407809 | orchestrator | 2025-09-19 06:57:38.407820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407830 | orchestrator | Friday 19 September 2025 06:57:32 +0000 (0:00:00.190) 0:00:17.809 ****** 2025-09-19 06:57:38.407841 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387) 2025-09-19 06:57:38.407854 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387) 2025-09-19 06:57:38.407864 | orchestrator | 2025-09-19 06:57:38.407875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407886 | orchestrator | Friday 19 September 2025 06:57:33 +0000 (0:00:00.402) 0:00:18.212 ****** 2025-09-19 06:57:38.407897 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7) 2025-09-19 06:57:38.407907 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7) 2025-09-19 06:57:38.407918 | orchestrator | 2025-09-19 06:57:38.407929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407940 | orchestrator | Friday 19 September 2025 06:57:33 +0000 (0:00:00.432) 0:00:18.644 ****** 2025-09-19 06:57:38.407950 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee) 
2025-09-19 06:57:38.407961 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee) 2025-09-19 06:57:38.407972 | orchestrator | 2025-09-19 06:57:38.407982 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.407993 | orchestrator | Friday 19 September 2025 06:57:34 +0000 (0:00:00.416) 0:00:19.061 ****** 2025-09-19 06:57:38.408037 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0) 2025-09-19 06:57:38.408055 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0) 2025-09-19 06:57:38.408072 | orchestrator | 2025-09-19 06:57:38.408089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:57:38.408107 | orchestrator | Friday 19 September 2025 06:57:34 +0000 (0:00:00.434) 0:00:19.496 ****** 2025-09-19 06:57:38.408123 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:57:38.408139 | orchestrator | 2025-09-19 06:57:38.408154 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:38.408170 | orchestrator | Friday 19 September 2025 06:57:34 +0000 (0:00:00.319) 0:00:19.815 ****** 2025-09-19 06:57:38.408186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-19 06:57:38.408219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-19 06:57:38.408235 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-19 06:57:38.408252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-19 06:57:38.408270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-4 => (item=loop4) 2025-09-19 06:57:38.408289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-19 06:57:38.408307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-19 06:57:38.408325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-19 06:57:38.408341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-19 06:57:38.408359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-19 06:57:38.408376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-19 06:57:38.408394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-19 06:57:38.408412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-19 06:57:38.408429 | orchestrator | 2025-09-19 06:57:38.408445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:38.408464 | orchestrator | Friday 19 September 2025 06:57:35 +0000 (0:00:00.385) 0:00:20.200 ****** 2025-09-19 06:57:38.408510 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.408529 | orchestrator | 2025-09-19 06:57:38.408546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:38.408563 | orchestrator | Friday 19 September 2025 06:57:35 +0000 (0:00:00.185) 0:00:20.386 ****** 2025-09-19 06:57:38.408581 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:57:38.408599 | orchestrator | 2025-09-19 06:57:38.408617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:57:38.408634 | orchestrator | 
Friday 19 September 2025 06:57:36 +0000 (0:00:00.767) 0:00:21.153 ******
2025-09-19 06:57:38.408663 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:38.408684 | orchestrator |
2025-09-19 06:57:38.408702 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:38.408719 | orchestrator | Friday 19 September 2025 06:57:36 +0000 (0:00:00.229) 0:00:21.383 ******
2025-09-19 06:57:38.408739 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:38.408757 | orchestrator |
2025-09-19 06:57:38.408776 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:38.408796 | orchestrator | Friday 19 September 2025 06:57:36 +0000 (0:00:00.210) 0:00:21.593 ******
2025-09-19 06:57:38.408814 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:38.408832 | orchestrator |
2025-09-19 06:57:38.408851 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:38.408869 | orchestrator | Friday 19 September 2025 06:57:36 +0000 (0:00:00.211) 0:00:21.804 ******
2025-09-19 06:57:38.408887 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:38.408903 | orchestrator |
2025-09-19 06:57:38.408921 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:38.408938 | orchestrator | Friday 19 September 2025 06:57:37 +0000 (0:00:00.207) 0:00:22.012 ******
2025-09-19 06:57:38.408956 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:38.408973 | orchestrator |
2025-09-19 06:57:38.408991 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:38.409009 | orchestrator | Friday 19 September 2025 06:57:37 +0000 (0:00:00.196) 0:00:22.208 ******
2025-09-19 06:57:38.409027 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:38.409044 | orchestrator |
2025-09-19 06:57:38.409064 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:38.409096 | orchestrator | Friday 19 September 2025 06:57:37 +0000 (0:00:00.200) 0:00:22.409 ******
2025-09-19 06:57:38.409115 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-19 06:57:38.409134 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-19 06:57:38.409153 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-19 06:57:38.409171 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-19 06:57:38.409190 | orchestrator |
2025-09-19 06:57:38.409208 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:38.409225 | orchestrator | Friday 19 September 2025 06:57:38 +0000 (0:00:00.653) 0:00:23.062 ******
2025-09-19 06:57:38.409242 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:38.409259 | orchestrator |
2025-09-19 06:57:38.409297 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:43.981367 | orchestrator | Friday 19 September 2025 06:57:38 +0000 (0:00:00.209) 0:00:23.271 ******
2025-09-19 06:57:43.981464 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.981523 | orchestrator |
2025-09-19 06:57:43.981542 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:43.981553 | orchestrator | Friday 19 September 2025 06:57:38 +0000 (0:00:00.199) 0:00:23.471 ******
2025-09-19 06:57:43.981564 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.981575 | orchestrator |
2025-09-19 06:57:43.981586 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:43.981598 | orchestrator | Friday 19 September 2025 06:57:38 +0000 (0:00:00.199) 0:00:23.671 ******
2025-09-19 06:57:43.981608 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.981619 | orchestrator |
2025-09-19 06:57:43.981630 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 06:57:43.981641 | orchestrator | Friday 19 September 2025 06:57:39 +0000 (0:00:00.208) 0:00:23.879 ******
2025-09-19 06:57:43.981652 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-09-19 06:57:43.981663 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-09-19 06:57:43.981674 | orchestrator |
2025-09-19 06:57:43.981685 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 06:57:43.981696 | orchestrator | Friday 19 September 2025 06:57:39 +0000 (0:00:00.363) 0:00:24.243 ******
2025-09-19 06:57:43.981707 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.981718 | orchestrator |
2025-09-19 06:57:43.981729 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 06:57:43.981740 | orchestrator | Friday 19 September 2025 06:57:39 +0000 (0:00:00.146) 0:00:24.389 ******
2025-09-19 06:57:43.981751 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.981762 | orchestrator |
2025-09-19 06:57:43.981773 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 06:57:43.981784 | orchestrator | Friday 19 September 2025 06:57:39 +0000 (0:00:00.140) 0:00:24.530 ******
2025-09-19 06:57:43.981795 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.981806 | orchestrator |
2025-09-19 06:57:43.981817 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 06:57:43.981828 | orchestrator | Friday 19 September 2025 06:57:39 +0000 (0:00:00.138) 0:00:24.668 ******
2025-09-19 06:57:43.981839 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:57:43.981850 | orchestrator |
2025-09-19 06:57:43.981861 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-19 06:57:43.981872 | orchestrator | Friday 19 September 2025 06:57:39 +0000 (0:00:00.137) 0:00:24.806 ******
2025-09-19 06:57:43.981884 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '05a06e17-0162-5722-bf4c-f18a4cab61c7'}})
2025-09-19 06:57:43.981895 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'caff573e-485a-5d29-90dc-90eefd21fd68'}})
2025-09-19 06:57:43.981906 | orchestrator |
2025-09-19 06:57:43.981917 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-19 06:57:43.981951 | orchestrator | Friday 19 September 2025 06:57:40 +0000 (0:00:00.174) 0:00:24.981 ******
2025-09-19 06:57:43.981964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '05a06e17-0162-5722-bf4c-f18a4cab61c7'}})
2025-09-19 06:57:43.981977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'caff573e-485a-5d29-90dc-90eefd21fd68'}})
2025-09-19 06:57:43.981989 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.982002 | orchestrator |
2025-09-19 06:57:43.982062 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-19 06:57:43.982076 | orchestrator | Friday 19 September 2025 06:57:40 +0000 (0:00:00.156) 0:00:25.137 ******
2025-09-19 06:57:43.982104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '05a06e17-0162-5722-bf4c-f18a4cab61c7'}})
2025-09-19 06:57:43.982117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'caff573e-485a-5d29-90dc-90eefd21fd68'}})
2025-09-19 06:57:43.982129 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.982141 | orchestrator |
2025-09-19 06:57:43.982154 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-19 06:57:43.982166 | orchestrator | Friday 19 September 2025 06:57:40 +0000 (0:00:00.146) 0:00:25.284 ******
2025-09-19 06:57:43.982178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '05a06e17-0162-5722-bf4c-f18a4cab61c7'}})
2025-09-19 06:57:43.982191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'caff573e-485a-5d29-90dc-90eefd21fd68'}})
2025-09-19 06:57:43.982205 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.982217 | orchestrator |
2025-09-19 06:57:43.982230 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-19 06:57:43.982242 | orchestrator | Friday 19 September 2025 06:57:40 +0000 (0:00:00.154) 0:00:25.438 ******
2025-09-19 06:57:43.982254 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:57:43.982266 | orchestrator |
2025-09-19 06:57:43.982278 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-19 06:57:43.982291 | orchestrator | Friday 19 September 2025 06:57:40 +0000 (0:00:00.135) 0:00:25.574 ******
2025-09-19 06:57:43.982303 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:57:43.982313 | orchestrator |
2025-09-19 06:57:43.982324 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-19 06:57:43.982335 | orchestrator | Friday 19 September 2025 06:57:40 +0000 (0:00:00.140) 0:00:25.714 ******
2025-09-19 06:57:43.982345 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.982356 | orchestrator |
2025-09-19 06:57:43.982383 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-19 06:57:43.982394 | orchestrator | Friday 19 September 2025 06:57:40 +0000 (0:00:00.132) 0:00:25.847 ******
2025-09-19 06:57:43.982405 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.982415 | orchestrator |
2025-09-19 06:57:43.982426 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-19 06:57:43.982443 | orchestrator | Friday 19 September 2025 06:57:41 +0000 (0:00:00.322) 0:00:26.170 ******
2025-09-19 06:57:43.982462 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.982506 | orchestrator |
2025-09-19 06:57:43.982518 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 06:57:43.982529 | orchestrator | Friday 19 September 2025 06:57:41 +0000 (0:00:00.135) 0:00:26.306 ******
2025-09-19 06:57:43.982539 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 06:57:43.982550 | orchestrator |     "ceph_osd_devices": {
2025-09-19 06:57:43.982561 | orchestrator |         "sdb": {
2025-09-19 06:57:43.982573 | orchestrator |             "osd_lvm_uuid": "05a06e17-0162-5722-bf4c-f18a4cab61c7"
2025-09-19 06:57:43.982583 | orchestrator |         },
2025-09-19 06:57:43.982595 | orchestrator |         "sdc": {
2025-09-19 06:57:43.982617 | orchestrator |             "osd_lvm_uuid": "caff573e-485a-5d29-90dc-90eefd21fd68"
2025-09-19 06:57:43.982628 | orchestrator |         }
2025-09-19 06:57:43.982638 | orchestrator |     }
2025-09-19 06:57:43.982649 | orchestrator | }
2025-09-19 06:57:43.982660 | orchestrator |
2025-09-19 06:57:43.982671 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 06:57:43.982682 | orchestrator | Friday 19 September 2025 06:57:41 +0000 (0:00:00.126) 0:00:26.432 ******
2025-09-19 06:57:43.982692 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.982703 | orchestrator |
2025-09-19 06:57:43.982714 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 06:57:43.982724 | orchestrator | Friday 19 September 2025 06:57:41 +0000 (0:00:00.112) 0:00:26.545 ******
2025-09-19 06:57:43.982735 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.982746 | orchestrator |
2025-09-19 06:57:43.982756 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 06:57:43.982767 | orchestrator | Friday 19 September 2025 06:57:41 +0000 (0:00:00.094) 0:00:26.639 ******
2025-09-19 06:57:43.982778 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:57:43.982788 | orchestrator |
2025-09-19 06:57:43.982799 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 06:57:43.982810 | orchestrator | Friday 19 September 2025 06:57:41 +0000 (0:00:00.093) 0:00:26.733 ******
2025-09-19 06:57:43.982820 | orchestrator | changed: [testbed-node-4] => {
2025-09-19 06:57:43.982831 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-19 06:57:43.982842 | orchestrator |         "ceph_osd_devices": {
2025-09-19 06:57:43.982852 | orchestrator |             "sdb": {
2025-09-19 06:57:43.982863 | orchestrator |                 "osd_lvm_uuid": "05a06e17-0162-5722-bf4c-f18a4cab61c7"
2025-09-19 06:57:43.982874 | orchestrator |             },
2025-09-19 06:57:43.982885 | orchestrator |             "sdc": {
2025-09-19 06:57:43.982896 | orchestrator |                 "osd_lvm_uuid": "caff573e-485a-5d29-90dc-90eefd21fd68"
2025-09-19 06:57:43.982907 | orchestrator |             }
2025-09-19 06:57:43.982918 | orchestrator |         },
2025-09-19 06:57:43.982928 | orchestrator |         "lvm_volumes": [
2025-09-19 06:57:43.982939 | orchestrator |             {
2025-09-19 06:57:43.982950 | orchestrator |                 "data": "osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7",
2025-09-19 06:57:43.982961 | orchestrator |                 "data_vg": "ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7"
2025-09-19 06:57:43.982972 | orchestrator |             },
2025-09-19 06:57:43.982982 | orchestrator |             {
2025-09-19 06:57:43.982993 | orchestrator |                 "data": "osd-block-caff573e-485a-5d29-90dc-90eefd21fd68",
2025-09-19 06:57:43.983004 | orchestrator |                 "data_vg": "ceph-caff573e-485a-5d29-90dc-90eefd21fd68"
2025-09-19 06:57:43.983015 | orchestrator |             }
2025-09-19 06:57:43.983025 | orchestrator |         ]
2025-09-19 06:57:43.983036 | orchestrator |     }
2025-09-19 06:57:43.983047 | orchestrator | }
2025-09-19 06:57:43.983057 | orchestrator |
2025-09-19 06:57:43.983068 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 06:57:43.983078 | orchestrator | Friday 19 September 2025 06:57:42 +0000 (0:00:00.159) 0:00:26.892 ******
2025-09-19 06:57:43.983089 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 06:57:43.983100 | orchestrator |
2025-09-19 06:57:43.983110 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 06:57:43.983121 | orchestrator |
2025-09-19 06:57:43.983132 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 06:57:43.983142 | orchestrator | Friday 19 September 2025 06:57:42 +0000 (0:00:00.777) 0:00:27.670 ******
2025-09-19 06:57:43.983153 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-19 06:57:43.983164 | orchestrator |
2025-09-19 06:57:43.983174 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 06:57:43.983185 | orchestrator | Friday 19 September 2025 06:57:43 +0000 (0:00:00.509) 0:00:27.993 ******
2025-09-19 06:57:43.983202 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:57:43.983213 | orchestrator |
2025-09-19 06:57:43.983223 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:43.983234 | orchestrator | Friday 19 September 2025 06:57:43 +0000 (0:00:00.343) 0:00:28.503 ******
2025-09-19 06:57:43.983251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-19 06:57:43.983262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-19 06:57:43.983273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-19 06:57:43.983283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-19 06:57:43.983294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-09-19 06:57:43.983304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-09-19 06:57:43.983322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-09-19 06:57:51.225688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-09-19 06:57:51.225779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-09-19 06:57:51.225794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-09-19 06:57:51.225806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-09-19 06:57:51.225816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-09-19 06:57:51.225827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-09-19 06:57:51.225838 | orchestrator |
2025-09-19 06:57:51.225850 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.225861 | orchestrator | Friday 19 September 2025 06:57:43 +0000 (0:00:00.343) 0:00:28.846 ******
2025-09-19 06:57:51.225872 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.225883 | orchestrator |
2025-09-19 06:57:51.225894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.225905 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.184) 0:00:29.031 ******
2025-09-19 06:57:51.225916 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.225927 | orchestrator |
2025-09-19 06:57:51.225937 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.225948 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.180) 0:00:29.212 ******
2025-09-19 06:57:51.225959 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.225969 | orchestrator |
2025-09-19 06:57:51.225980 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.225991 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.180) 0:00:29.392 ******
2025-09-19 06:57:51.226001 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.226012 | orchestrator |
2025-09-19 06:57:51.226074 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.226086 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.209) 0:00:29.601 ******
2025-09-19 06:57:51.226097 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.226108 | orchestrator |
2025-09-19 06:57:51.226149 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.226161 | orchestrator | Friday 19 September 2025 06:57:44 +0000 (0:00:00.178) 0:00:29.780 ******
2025-09-19 06:57:51.226172 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.226183 | orchestrator |
2025-09-19 06:57:51.226194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.226205 | orchestrator | Friday 19 September 2025 06:57:45 +0000 (0:00:00.176) 0:00:29.956 ******
2025-09-19 06:57:51.226216 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.226249 | orchestrator |
2025-09-19 06:57:51.226263 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.226276 | orchestrator | Friday 19 September 2025 06:57:45 +0000 (0:00:00.163) 0:00:30.119 ******
2025-09-19 06:57:51.226289 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.226301 | orchestrator |
2025-09-19 06:57:51.226313 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.226325 | orchestrator | Friday 19 September 2025 06:57:45 +0000 (0:00:00.153) 0:00:30.273 ******
2025-09-19 06:57:51.226338 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf)
2025-09-19 06:57:51.226352 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf)
2025-09-19 06:57:51.226364 | orchestrator |
2025-09-19 06:57:51.226376 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.226388 | orchestrator | Friday 19 September 2025 06:57:45 +0000 (0:00:00.566) 0:00:30.839 ******
2025-09-19 06:57:51.226400 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400)
2025-09-19 06:57:51.226413 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400)
2025-09-19 06:57:51.226426 | orchestrator |
2025-09-19 06:57:51.226438 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.226450 | orchestrator | Friday 19 September 2025 06:57:46 +0000 (0:00:00.678) 0:00:31.517 ******
2025-09-19 06:57:51.226462 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c)
2025-09-19 06:57:51.226496 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c)
2025-09-19 06:57:51.226509 | orchestrator |
2025-09-19 06:57:51.226521 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.226534 | orchestrator | Friday 19 September 2025 06:57:47 +0000 (0:00:00.419) 0:00:31.937 ******
2025-09-19 06:57:51.226546 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3)
2025-09-19 06:57:51.226559 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3)
2025-09-19 06:57:51.226572 | orchestrator |
2025-09-19 06:57:51.226585 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:57:51.226596 | orchestrator | Friday 19 September 2025 06:57:47 +0000 (0:00:00.434) 0:00:32.371 ******
2025-09-19 06:57:51.226607 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 06:57:51.226617 | orchestrator |
2025-09-19 06:57:51.226628 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.226638 | orchestrator | Friday 19 September 2025 06:57:47 +0000 (0:00:00.322) 0:00:32.693 ******
2025-09-19 06:57:51.226666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-09-19 06:57:51.226677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-09-19 06:57:51.226688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-09-19 06:57:51.226699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-09-19 06:57:51.226709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-09-19 06:57:51.226720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-09-19 06:57:51.226731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-09-19 06:57:51.226741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-09-19 06:57:51.226752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-09-19 06:57:51.226785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-09-19 06:57:51.226797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-09-19 06:57:51.226807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-09-19 06:57:51.226818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-09-19 06:57:51.226829 | orchestrator |
2025-09-19 06:57:51.226839 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.226850 | orchestrator | Friday 19 September 2025 06:57:48 +0000 (0:00:00.337) 0:00:33.031 ******
2025-09-19 06:57:51.226861 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.226871 | orchestrator |
2025-09-19 06:57:51.226882 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.226893 | orchestrator | Friday 19 September 2025 06:57:48 +0000 (0:00:00.172) 0:00:33.204 ******
2025-09-19 06:57:51.226903 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.226914 | orchestrator |
2025-09-19 06:57:51.226925 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.226935 | orchestrator | Friday 19 September 2025 06:57:48 +0000 (0:00:00.214) 0:00:33.419 ******
2025-09-19 06:57:51.226946 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.226957 | orchestrator |
2025-09-19 06:57:51.226972 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.226983 | orchestrator | Friday 19 September 2025 06:57:48 +0000 (0:00:00.174) 0:00:33.593 ******
2025-09-19 06:57:51.226993 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.227004 | orchestrator |
2025-09-19 06:57:51.227015 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.227025 | orchestrator | Friday 19 September 2025 06:57:48 +0000 (0:00:00.146) 0:00:33.740 ******
2025-09-19 06:57:51.227036 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.227047 | orchestrator |
2025-09-19 06:57:51.227057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.227068 | orchestrator | Friday 19 September 2025 06:57:49 +0000 (0:00:00.158) 0:00:33.898 ******
2025-09-19 06:57:51.227079 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.227089 | orchestrator |
2025-09-19 06:57:51.227100 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.227111 | orchestrator | Friday 19 September 2025 06:57:49 +0000 (0:00:00.468) 0:00:34.367 ******
2025-09-19 06:57:51.227121 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.227132 | orchestrator |
2025-09-19 06:57:51.227143 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.227153 | orchestrator | Friday 19 September 2025 06:57:49 +0000 (0:00:00.183) 0:00:34.550 ******
2025-09-19 06:57:51.227164 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.227175 | orchestrator |
2025-09-19 06:57:51.227185 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.227196 | orchestrator | Friday 19 September 2025 06:57:49 +0000 (0:00:00.189) 0:00:34.740 ******
2025-09-19 06:57:51.227206 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-09-19 06:57:51.227217 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-09-19 06:57:51.227228 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-09-19 06:57:51.227239 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-09-19 06:57:51.227249 | orchestrator |
2025-09-19 06:57:51.227260 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.227271 | orchestrator | Friday 19 September 2025 06:57:50 +0000 (0:00:00.617) 0:00:35.357 ******
2025-09-19 06:57:51.227281 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.227292 | orchestrator |
2025-09-19 06:57:51.227302 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.227319 | orchestrator | Friday 19 September 2025 06:57:50 +0000 (0:00:00.181) 0:00:35.538 ******
2025-09-19 06:57:51.227330 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.227340 | orchestrator |
2025-09-19 06:57:51.227351 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.227362 | orchestrator | Friday 19 September 2025 06:57:50 +0000 (0:00:00.198) 0:00:35.736 ******
2025-09-19 06:57:51.227372 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.227383 | orchestrator |
2025-09-19 06:57:51.227394 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:57:51.227404 | orchestrator | Friday 19 September 2025 06:57:51 +0000 (0:00:00.180) 0:00:35.917 ******
2025-09-19 06:57:51.227415 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:51.227425 | orchestrator |
2025-09-19 06:57:51.227436 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 06:57:51.227453 | orchestrator | Friday 19 September 2025 06:57:51 +0000 (0:00:00.171) 0:00:36.088 ******
2025-09-19 06:57:56.129504 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-09-19 06:57:56.129592 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-09-19 06:57:56.129607 | orchestrator |
2025-09-19 06:57:56.129619 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 06:57:56.129630 | orchestrator | Friday 19 September 2025 06:57:51 +0000 (0:00:00.160) 0:00:36.249 ******
2025-09-19 06:57:56.129641 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.129652 | orchestrator |
2025-09-19 06:57:56.129663 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 06:57:56.129674 | orchestrator | Friday 19 September 2025 06:57:51 +0000 (0:00:00.129) 0:00:36.378 ******
2025-09-19 06:57:56.129684 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.129695 | orchestrator |
2025-09-19 06:57:56.129706 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 06:57:56.129717 | orchestrator | Friday 19 September 2025 06:57:51 +0000 (0:00:00.120) 0:00:36.499 ******
2025-09-19 06:57:56.129727 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.129738 | orchestrator |
2025-09-19 06:57:56.129749 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 06:57:56.129759 | orchestrator | Friday 19 September 2025 06:57:51 +0000 (0:00:00.129) 0:00:36.628 ******
2025-09-19 06:57:56.129770 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:57:56.129781 | orchestrator |
2025-09-19 06:57:56.129792 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-19 06:57:56.129803 | orchestrator | Friday 19 September 2025 06:57:52 +0000 (0:00:00.379) 0:00:37.007 ******
2025-09-19 06:57:56.129814 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd4db71fd-07e0-550b-b185-dcfd36a5307b'}})
2025-09-19 06:57:56.129826 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0c5dfb3-0a46-5f65-b869-b08108365918'}})
2025-09-19 06:57:56.129836 | orchestrator |
2025-09-19 06:57:56.129847 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-19 06:57:56.129858 | orchestrator | Friday 19 September 2025 06:57:52 +0000 (0:00:00.186) 0:00:37.194 ******
2025-09-19 06:57:56.129869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd4db71fd-07e0-550b-b185-dcfd36a5307b'}})
2025-09-19 06:57:56.129881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0c5dfb3-0a46-5f65-b869-b08108365918'}})
2025-09-19 06:57:56.129892 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.129902 | orchestrator |
2025-09-19 06:57:56.129913 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-19 06:57:56.129924 | orchestrator | Friday 19 September 2025 06:57:52 +0000 (0:00:00.165) 0:00:37.359 ******
2025-09-19 06:57:56.129934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd4db71fd-07e0-550b-b185-dcfd36a5307b'}})
2025-09-19 06:57:56.129969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0c5dfb3-0a46-5f65-b869-b08108365918'}})
2025-09-19 06:57:56.129981 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.129991 | orchestrator |
2025-09-19 06:57:56.130002 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-19 06:57:56.130067 | orchestrator | Friday 19 September 2025 06:57:52 +0000 (0:00:00.212) 0:00:37.572 ******
2025-09-19 06:57:56.130082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd4db71fd-07e0-550b-b185-dcfd36a5307b'}})
2025-09-19 06:57:56.130094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0c5dfb3-0a46-5f65-b869-b08108365918'}})
2025-09-19 06:57:56.130106 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.130116 | orchestrator |
2025-09-19 06:57:56.130127 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-19 06:57:56.130137 | orchestrator | Friday 19 September 2025 06:57:52 +0000 (0:00:00.182) 0:00:37.755 ******
2025-09-19 06:57:56.130148 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:57:56.130159 | orchestrator |
2025-09-19 06:57:56.130185 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-19 06:57:56.130196 | orchestrator | Friday 19 September 2025 06:57:53 +0000 (0:00:00.149) 0:00:37.905 ******
2025-09-19 06:57:56.130207 | orchestrator | ok: [testbed-node-5]
2025-09-19 06:57:56.130218 | orchestrator |
2025-09-19 06:57:56.130228 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-19 06:57:56.130239 | orchestrator | Friday 19 September 2025 06:57:53 +0000 (0:00:00.214) 0:00:38.120 ******
2025-09-19 06:57:56.130249 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.130260 | orchestrator |
2025-09-19 06:57:56.130271 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-19 06:57:56.130282 | orchestrator | Friday 19 September 2025 06:57:53 +0000 (0:00:00.218) 0:00:38.339 ******
2025-09-19 06:57:56.130292 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.130303 | orchestrator |
2025-09-19 06:57:56.130313 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-19 06:57:56.130324 | orchestrator | Friday 19 September 2025 06:57:53 +0000 (0:00:00.188) 0:00:38.527 ******
2025-09-19 06:57:56.130334 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.130345 | orchestrator |
2025-09-19 06:57:56.130356 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 06:57:56.130367 | orchestrator | Friday 19 September 2025 06:57:53 +0000 (0:00:00.223) 0:00:38.750 ******
2025-09-19 06:57:56.130377 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 06:57:56.130388 | orchestrator |     "ceph_osd_devices": {
2025-09-19 06:57:56.130399 | orchestrator |         "sdb": {
2025-09-19 06:57:56.130411 | orchestrator |             "osd_lvm_uuid": "d4db71fd-07e0-550b-b185-dcfd36a5307b"
2025-09-19 06:57:56.130438 | orchestrator |         },
2025-09-19 06:57:56.130450 | orchestrator |         "sdc": {
2025-09-19 06:57:56.130460 | orchestrator |             "osd_lvm_uuid": "a0c5dfb3-0a46-5f65-b869-b08108365918"
2025-09-19 06:57:56.130498 | orchestrator |         }
2025-09-19 06:57:56.130509 | orchestrator |     }
2025-09-19 06:57:56.130521 | orchestrator | }
2025-09-19 06:57:56.130532 | orchestrator |
2025-09-19 06:57:56.130543 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 06:57:56.130554 | orchestrator | Friday 19 September 2025 06:57:54 +0000 (0:00:00.231) 0:00:38.982 ******
2025-09-19 06:57:56.130565 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.130576 | orchestrator |
2025-09-19 06:57:56.130587 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 06:57:56.130598 | orchestrator | Friday 19 September 2025 06:57:54 +0000 (0:00:00.159) 0:00:39.142 ******
2025-09-19 06:57:56.130608 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.130619 | orchestrator |
2025-09-19 06:57:56.130630 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 06:57:56.130650 | orchestrator | Friday 19 September 2025 06:57:54 +0000 (0:00:00.418) 0:00:39.561 ******
2025-09-19 06:57:56.130661 | orchestrator | skipping: [testbed-node-5]
2025-09-19 06:57:56.130671 | orchestrator |
2025-09-19 06:57:56.130682 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 06:57:56.130693 | orchestrator | Friday 19 September 2025 06:57:54 +0000 (0:00:00.144) 0:00:39.706 ******
2025-09-19 06:57:56.130704 | orchestrator | changed: [testbed-node-5] => {
2025-09-19 06:57:56.130715 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-19 06:57:56.130727 | orchestrator |         "ceph_osd_devices": {
2025-09-19 06:57:56.130746 | orchestrator |             "sdb": {
2025-09-19 06:57:56.130758 | orchestrator |                 "osd_lvm_uuid": "d4db71fd-07e0-550b-b185-dcfd36a5307b"
2025-09-19 06:57:56.130769 | orchestrator |             },
2025-09-19 06:57:56.130780 | orchestrator |             "sdc": {
2025-09-19 06:57:56.130791 | orchestrator |                 "osd_lvm_uuid": "a0c5dfb3-0a46-5f65-b869-b08108365918"
2025-09-19 06:57:56.130802 | orchestrator |             }
2025-09-19 06:57:56.130813 | orchestrator |         },
2025-09-19 06:57:56.130824 | orchestrator |         "lvm_volumes": [
2025-09-19 06:57:56.130835 | orchestrator |             {
2025-09-19 06:57:56.130846 | orchestrator |                 "data": "osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b",
2025-09-19 06:57:56.130857 | orchestrator |                 "data_vg": "ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b"
2025-09-19 06:57:56.130868 | orchestrator |             },
2025-09-19 06:57:56.130879 | orchestrator |             {
2025-09-19 06:57:56.130890 | orchestrator |                 "data": "osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918",
2025-09-19 06:57:56.130901 | orchestrator |                 "data_vg": "ceph-a0c5dfb3-0a46-5f65-b869-b08108365918"
2025-09-19 06:57:56.130912 | orchestrator |             }
2025-09-19 06:57:56.130923 | orchestrator |         ]
2025-09-19 06:57:56.130934 | orchestrator |     }
2025-09-19 06:57:56.130949 | orchestrator | }
2025-09-19 06:57:56.130960 | orchestrator |
2025-09-19 06:57:56.130971 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 06:57:56.130982 | orchestrator | Friday 19 September 2025 06:57:55 +0000 (0:00:00.266) 0:00:39.973 ****** 2025-09-19 06:57:56.130992 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 06:57:56.131003 | orchestrator | 2025-09-19 06:57:56.131014 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 06:57:56.131025 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 06:57:56.131037 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 06:57:56.131048 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 06:57:56.131059 | orchestrator | 2025-09-19 06:57:56.131070 | orchestrator | 2025-09-19 06:57:56.131080 | orchestrator | 2025-09-19 06:57:56.131091 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 06:57:56.131102 | orchestrator | Friday 19 September 2025 06:57:56 +0000 (0:00:01.013) 0:00:40.986 ****** 2025-09-19 06:57:56.131113 | orchestrator | =============================================================================== 2025-09-19 06:57:56.131124 | orchestrator | Write configuration file ------------------------------------------------ 3.95s 2025-09-19 06:57:56.131134 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s 2025-09-19 06:57:56.131145 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s 2025-09-19 06:57:56.131156 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2025-09-19 06:57:56.131167 | orchestrator | Get initial list of available block devices ----------------------------- 0.97s 2025-09-19 06:57:56.131184 | orchestrator | Get extra vars for Ceph configuration 
----------------------------------- 0.83s 2025-09-19 06:57:56.131194 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2025-09-19 06:57:56.131205 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-09-19 06:57:56.131216 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.71s 2025-09-19 06:57:56.131227 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.70s 2025-09-19 06:57:56.131238 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-09-19 06:57:56.131248 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-09-19 06:57:56.131259 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-09-19 06:57:56.131270 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.65s 2025-09-19 06:57:56.131288 | orchestrator | Set WAL devices config data --------------------------------------------- 0.65s 2025-09-19 06:57:56.361725 | orchestrator | Print DB devices -------------------------------------------------------- 0.64s 2025-09-19 06:57:56.361809 | orchestrator | Print configuration data ------------------------------------------------ 0.63s 2025-09-19 06:57:56.361823 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2025-09-19 06:57:56.361835 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-09-19 06:57:56.361846 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-09-19 06:58:18.774842 | orchestrator | 2025-09-19 06:58:18 | INFO  | Task f9f54959-c888-42e9-8abc-16a034f1f6f7 (sync inventory) is running in background. Output coming soon. 
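The configure run above derives the `lvm_volumes` list from `ceph_osd_devices`: as the "Print configuration data" output shows, each device entry with an `osd_lvm_uuid` becomes one block-only volume whose LV and VG names embed that UUID. A minimal Python sketch of this mapping (a hypothetical helper for illustration; the playbook itself does this with Ansible/Jinja2 templating):

```python
# Sketch of the ceph_osd_devices -> lvm_volumes mapping visible in the
# "Print configuration data" task output. Illustrative only; naming
# conventions (osd-block-<uuid>, ceph-<uuid>) are taken from the log.

def build_lvm_volumes(ceph_osd_devices):
    """Build block-only lvm_volumes entries (no separate DB/WAL devices)."""
    volumes = []
    for device, config in ceph_osd_devices.items():
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "d4db71fd-07e0-550b-b185-dcfd36a5307b"},
    "sdc": {"osd_lvm_uuid": "a0c5dfb3-0a46-5f65-b869-b08108365918"},
}

for volume in build_lvm_volumes(ceph_osd_devices):
    print(volume)
```

This matches the `_ceph_configure_lvm_config_data` structure that the "Write configuration file" handler then persists on the manager node.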
2025-09-19 06:58:44.287112 | orchestrator | 2025-09-19 06:58:19 | INFO  | Starting group_vars file reorganization 2025-09-19 06:58:44.287196 | orchestrator | 2025-09-19 06:58:19 | INFO  | Moved 0 file(s) to their respective directories 2025-09-19 06:58:44.287210 | orchestrator | 2025-09-19 06:58:19 | INFO  | Group_vars file reorganization completed 2025-09-19 06:58:44.287220 | orchestrator | 2025-09-19 06:58:22 | INFO  | Starting variable preparation from inventory 2025-09-19 06:58:44.287231 | orchestrator | 2025-09-19 06:58:25 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-19 06:58:44.287240 | orchestrator | 2025-09-19 06:58:25 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-19 06:58:44.287250 | orchestrator | 2025-09-19 06:58:25 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-19 06:58:44.287260 | orchestrator | 2025-09-19 06:58:25 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-19 06:58:44.287269 | orchestrator | 2025-09-19 06:58:25 | INFO  | Variable preparation completed 2025-09-19 06:58:44.287279 | orchestrator | 2025-09-19 06:58:27 | INFO  | Starting inventory overwrite handling 2025-09-19 06:58:44.287289 | orchestrator | 2025-09-19 06:58:27 | INFO  | Handling group overwrites in 99-overwrite 2025-09-19 06:58:44.287319 | orchestrator | 2025-09-19 06:58:27 | INFO  | Removing group frr:children from 60-generic 2025-09-19 06:58:44.287329 | orchestrator | 2025-09-19 06:58:27 | INFO  | Removing group storage:children from 50-kolla 2025-09-19 06:58:44.287339 | orchestrator | 2025-09-19 06:58:27 | INFO  | Removing group netbird:children from 50-infrastruture 2025-09-19 06:58:44.287349 | orchestrator | 2025-09-19 06:58:27 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-19 06:58:44.287359 | orchestrator | 2025-09-19 06:58:27 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-19 06:58:44.287368 | orchestrator | 2025-09-19 06:58:27 | INFO  | Handling group 
overwrites in 20-roles 2025-09-19 06:58:44.287378 | orchestrator | 2025-09-19 06:58:27 | INFO  | Removing group k3s_node from 50-infrastruture 2025-09-19 06:58:44.287407 | orchestrator | 2025-09-19 06:58:27 | INFO  | Removed 6 group(s) in total 2025-09-19 06:58:44.287418 | orchestrator | 2025-09-19 06:58:27 | INFO  | Inventory overwrite handling completed 2025-09-19 06:58:44.287427 | orchestrator | 2025-09-19 06:58:28 | INFO  | Starting merge of inventory files 2025-09-19 06:58:44.287437 | orchestrator | 2025-09-19 06:58:28 | INFO  | Inventory files merged successfully 2025-09-19 06:58:44.287482 | orchestrator | 2025-09-19 06:58:33 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-19 06:58:44.287497 | orchestrator | 2025-09-19 06:58:43 | INFO  | Successfully wrote ClusterShell configuration 2025-09-19 06:58:44.287507 | orchestrator | [master 952aa2a] 2025-09-19-06-58 2025-09-19 06:58:44.287518 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-19 06:58:46.481524 | orchestrator | 2025-09-19 06:58:46 | INFO  | Task 6f9640e1-dd13-4d15-beb4-8678f90e5d61 (ceph-create-lvm-devices) was prepared for execution. 2025-09-19 06:58:46.481629 | orchestrator | 2025-09-19 06:58:46 | INFO  | It takes a moment until task 6f9640e1-dd13-4d15-beb4-8678f90e5d61 (ceph-create-lvm-devices) has been started and output is visible here. 
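The ceph-create-lvm-devices play that follows creates one volume group and one logical volume per OSD entry ("Create block VGs" / "Create block LVs"). A hedged sketch of the equivalent LVM commands, derived from an `lvm_volumes` entry (command shapes and the use of the whole disk as the physical volume are assumptions based on standard LVM usage; the playbook likely drives this through Ansible's LVM modules rather than raw shell):

```python
# Sketch of the per-OSD VG/LV creation performed by the play below.
# The vgcreate/lvcreate invocations are assumed, not taken from the
# playbook source; names follow the ceph-<uuid>/osd-block-<uuid>
# convention seen in the log.

def lvm_commands(device, osd_lvm_uuid):
    """Return the shell commands to create the block VG and LV for one OSD."""
    vg = f"ceph-{osd_lvm_uuid}"
    lv = f"osd-block-{osd_lvm_uuid}"
    return [
        f"vgcreate {vg} /dev/{device}",        # whole device as PV (assumed)
        f"lvcreate -l 100%FREE -n {lv} {vg}",  # one LV spanning the VG (assumed)
    ]

for cmd in lvm_commands("sdb", "deb73447-54c2-58c6-89f8-2e63b50c59b2"):
    print(cmd)
```

These names are what ceph-volume later consumes via the `lvm_volumes` entries (`data` = LV, `data_vg` = VG).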
2025-09-19 06:58:57.979377 | orchestrator | 2025-09-19 06:58:57.979539 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 06:58:57.979558 | orchestrator | 2025-09-19 06:58:57.979570 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 06:58:57.979582 | orchestrator | Friday 19 September 2025 06:58:50 +0000 (0:00:00.308) 0:00:00.308 ****** 2025-09-19 06:58:57.979594 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 06:58:57.979606 | orchestrator | 2025-09-19 06:58:57.979617 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 06:58:57.979628 | orchestrator | Friday 19 September 2025 06:58:50 +0000 (0:00:00.279) 0:00:00.588 ****** 2025-09-19 06:58:57.979639 | orchestrator | ok: [testbed-node-3] 2025-09-19 06:58:57.979651 | orchestrator | 2025-09-19 06:58:57.979662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.979673 | orchestrator | Friday 19 September 2025 06:58:51 +0000 (0:00:00.229) 0:00:00.817 ****** 2025-09-19 06:58:57.979684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-19 06:58:57.979712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-19 06:58:57.979733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-19 06:58:57.979744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-19 06:58:57.979755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-19 06:58:57.979766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-19 06:58:57.979777 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-19 06:58:57.979787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-19 06:58:57.979798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-19 06:58:57.979810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-19 06:58:57.979820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-19 06:58:57.979831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-19 06:58:57.979842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-19 06:58:57.979853 | orchestrator | 2025-09-19 06:58:57.979864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.979897 | orchestrator | Friday 19 September 2025 06:58:51 +0000 (0:00:00.400) 0:00:01.218 ****** 2025-09-19 06:58:57.979909 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.979923 | orchestrator | 2025-09-19 06:58:57.979936 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.979949 | orchestrator | Friday 19 September 2025 06:58:52 +0000 (0:00:00.458) 0:00:01.677 ****** 2025-09-19 06:58:57.979961 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.979973 | orchestrator | 2025-09-19 06:58:57.979986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.979999 | orchestrator | Friday 19 September 2025 06:58:52 +0000 (0:00:00.199) 0:00:01.876 ****** 2025-09-19 06:58:57.980011 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980023 | orchestrator | 2025-09-19 06:58:57.980036 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-09-19 06:58:57.980048 | orchestrator | Friday 19 September 2025 06:58:52 +0000 (0:00:00.210) 0:00:02.086 ****** 2025-09-19 06:58:57.980060 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980072 | orchestrator | 2025-09-19 06:58:57.980084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.980095 | orchestrator | Friday 19 September 2025 06:58:52 +0000 (0:00:00.205) 0:00:02.292 ****** 2025-09-19 06:58:57.980106 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980117 | orchestrator | 2025-09-19 06:58:57.980128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.980138 | orchestrator | Friday 19 September 2025 06:58:52 +0000 (0:00:00.204) 0:00:02.497 ****** 2025-09-19 06:58:57.980149 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980160 | orchestrator | 2025-09-19 06:58:57.980171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.980182 | orchestrator | Friday 19 September 2025 06:58:53 +0000 (0:00:00.212) 0:00:02.709 ****** 2025-09-19 06:58:57.980192 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980203 | orchestrator | 2025-09-19 06:58:57.980214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.980225 | orchestrator | Friday 19 September 2025 06:58:53 +0000 (0:00:00.195) 0:00:02.904 ****** 2025-09-19 06:58:57.980236 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980247 | orchestrator | 2025-09-19 06:58:57.980257 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.980268 | orchestrator | Friday 19 September 2025 06:58:53 +0000 (0:00:00.189) 0:00:03.094 ****** 2025-09-19 06:58:57.980279 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37) 2025-09-19 06:58:57.980290 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37) 2025-09-19 06:58:57.980301 | orchestrator | 2025-09-19 06:58:57.980312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.980323 | orchestrator | Friday 19 September 2025 06:58:53 +0000 (0:00:00.397) 0:00:03.492 ****** 2025-09-19 06:58:57.980354 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4dd49722-42e6-4e94-9106-a95d5116fdb0) 2025-09-19 06:58:57.980366 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4dd49722-42e6-4e94-9106-a95d5116fdb0) 2025-09-19 06:58:57.980377 | orchestrator | 2025-09-19 06:58:57.980387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.980398 | orchestrator | Friday 19 September 2025 06:58:54 +0000 (0:00:00.462) 0:00:03.955 ****** 2025-09-19 06:58:57.980409 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1cf24504-b3f3-4e87-bda4-4a150d83b5cd) 2025-09-19 06:58:57.980419 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1cf24504-b3f3-4e87-bda4-4a150d83b5cd) 2025-09-19 06:58:57.980430 | orchestrator | 2025-09-19 06:58:57.980458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.980478 | orchestrator | Friday 19 September 2025 06:58:54 +0000 (0:00:00.532) 0:00:04.488 ****** 2025-09-19 06:58:57.980489 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b11ce89-f193-4587-acb9-80845fc85b80) 2025-09-19 06:58:57.980499 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b11ce89-f193-4587-acb9-80845fc85b80) 2025-09-19 06:58:57.980510 | orchestrator | 2025-09-19 06:58:57.980521 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:58:57.980531 | orchestrator | Friday 19 September 2025 06:58:55 +0000 (0:00:00.722) 0:00:05.211 ****** 2025-09-19 06:58:57.980542 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:58:57.980552 | orchestrator | 2025-09-19 06:58:57.980563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:57.980574 | orchestrator | Friday 19 September 2025 06:58:55 +0000 (0:00:00.318) 0:00:05.529 ****** 2025-09-19 06:58:57.980584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-19 06:58:57.980595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-19 06:58:57.980605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-19 06:58:57.980616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-19 06:58:57.980626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-19 06:58:57.980637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-19 06:58:57.980648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-19 06:58:57.980658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-19 06:58:57.980688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-19 06:58:57.980699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-19 06:58:57.980710 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-19 06:58:57.980720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-19 06:58:57.980735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-19 06:58:57.980746 | orchestrator | 2025-09-19 06:58:57.980757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:57.980768 | orchestrator | Friday 19 September 2025 06:58:56 +0000 (0:00:00.388) 0:00:05.917 ****** 2025-09-19 06:58:57.980778 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980789 | orchestrator | 2025-09-19 06:58:57.980800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:57.980810 | orchestrator | Friday 19 September 2025 06:58:56 +0000 (0:00:00.186) 0:00:06.103 ****** 2025-09-19 06:58:57.980821 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980831 | orchestrator | 2025-09-19 06:58:57.980842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:57.980852 | orchestrator | Friday 19 September 2025 06:58:56 +0000 (0:00:00.206) 0:00:06.310 ****** 2025-09-19 06:58:57.980863 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980873 | orchestrator | 2025-09-19 06:58:57.980884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:57.980895 | orchestrator | Friday 19 September 2025 06:58:56 +0000 (0:00:00.223) 0:00:06.533 ****** 2025-09-19 06:58:57.980905 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980916 | orchestrator | 2025-09-19 06:58:57.980926 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:57.980943 | orchestrator | Friday 19 September 2025 
06:58:57 +0000 (0:00:00.218) 0:00:06.752 ****** 2025-09-19 06:58:57.980954 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.980964 | orchestrator | 2025-09-19 06:58:57.980975 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:57.980986 | orchestrator | Friday 19 September 2025 06:58:57 +0000 (0:00:00.221) 0:00:06.974 ****** 2025-09-19 06:58:57.980996 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.981007 | orchestrator | 2025-09-19 06:58:57.981017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:57.981028 | orchestrator | Friday 19 September 2025 06:58:57 +0000 (0:00:00.210) 0:00:07.185 ****** 2025-09-19 06:58:57.981038 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:58:57.981049 | orchestrator | 2025-09-19 06:58:57.981059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:58:57.981070 | orchestrator | Friday 19 September 2025 06:58:57 +0000 (0:00:00.225) 0:00:07.411 ****** 2025-09-19 06:58:57.981087 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:59:06.726307 | orchestrator | 2025-09-19 06:59:06.726489 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:06.726510 | orchestrator | Friday 19 September 2025 06:58:57 +0000 (0:00:00.219) 0:00:07.630 ****** 2025-09-19 06:59:06.726522 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-19 06:59:06.726534 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-19 06:59:06.726546 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-19 06:59:06.726557 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-19 06:59:06.726568 | orchestrator | 2025-09-19 06:59:06.726579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:06.726591 | 
orchestrator | Friday 19 September 2025 06:58:59 +0000 (0:00:01.178) 0:00:08.809 ****** 2025-09-19 06:59:06.726601 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:59:06.726612 | orchestrator | 2025-09-19 06:59:06.726623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:06.726634 | orchestrator | Friday 19 September 2025 06:58:59 +0000 (0:00:00.200) 0:00:09.009 ****** 2025-09-19 06:59:06.726645 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:59:06.726656 | orchestrator | 2025-09-19 06:59:06.726666 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:06.726677 | orchestrator | Friday 19 September 2025 06:58:59 +0000 (0:00:00.206) 0:00:09.216 ****** 2025-09-19 06:59:06.726688 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:59:06.726699 | orchestrator | 2025-09-19 06:59:06.726710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:06.726722 | orchestrator | Friday 19 September 2025 06:58:59 +0000 (0:00:00.244) 0:00:09.461 ****** 2025-09-19 06:59:06.726732 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:59:06.726743 | orchestrator | 2025-09-19 06:59:06.726754 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 06:59:06.726765 | orchestrator | Friday 19 September 2025 06:58:59 +0000 (0:00:00.193) 0:00:09.654 ****** 2025-09-19 06:59:06.726776 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:59:06.726786 | orchestrator | 2025-09-19 06:59:06.726797 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-19 06:59:06.726808 | orchestrator | Friday 19 September 2025 06:59:00 +0000 (0:00:00.161) 0:00:09.816 ****** 2025-09-19 06:59:06.726820 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'deb73447-54c2-58c6-89f8-2e63b50c59b2'}}) 2025-09-19 06:59:06.726831 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}}) 2025-09-19 06:59:06.726844 | orchestrator | 2025-09-19 06:59:06.726858 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 06:59:06.726871 | orchestrator | Friday 19 September 2025 06:59:00 +0000 (0:00:00.208) 0:00:10.025 ****** 2025-09-19 06:59:06.726885 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 2025-09-19 06:59:06.726920 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 2025-09-19 06:59:06.726933 | orchestrator | 2025-09-19 06:59:06.726946 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 06:59:06.726973 | orchestrator | Friday 19 September 2025 06:59:02 +0000 (0:00:02.125) 0:00:12.151 ****** 2025-09-19 06:59:06.726987 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'})  2025-09-19 06:59:06.727002 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'})  2025-09-19 06:59:06.727015 | orchestrator | skipping: [testbed-node-3] 2025-09-19 06:59:06.727028 | orchestrator | 2025-09-19 06:59:06.727040 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-19 06:59:06.727053 | orchestrator | Friday 19 September 2025 06:59:02 +0000 (0:00:00.206) 0:00:12.358 ****** 2025-09-19 06:59:06.727065 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'})
2025-09-19 06:59:06.727078 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'})
2025-09-19 06:59:06.727090 | orchestrator |
2025-09-19 06:59:06.727103 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-19 06:59:06.727116 | orchestrator | Friday 19 September 2025 06:59:04 +0000 (0:00:01.477) 0:00:13.835 ******
2025-09-19 06:59:06.727129 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:06.727142 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:06.727155 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727168 | orchestrator |
2025-09-19 06:59:06.727180 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-19 06:59:06.727193 | orchestrator | Friday 19 September 2025 06:59:04 +0000 (0:00:00.144) 0:00:13.980 ******
2025-09-19 06:59:06.727207 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727219 | orchestrator |
2025-09-19 06:59:06.727230 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-19 06:59:06.727259 | orchestrator | Friday 19 September 2025 06:59:04 +0000 (0:00:00.178) 0:00:14.159 ******
2025-09-19 06:59:06.727271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:06.727282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:06.727293 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727304 | orchestrator |
2025-09-19 06:59:06.727314 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-19 06:59:06.727325 | orchestrator | Friday 19 September 2025 06:59:04 +0000 (0:00:00.439) 0:00:14.599 ******
2025-09-19 06:59:06.727336 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727346 | orchestrator |
2025-09-19 06:59:06.727357 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-19 06:59:06.727368 | orchestrator | Friday 19 September 2025 06:59:05 +0000 (0:00:00.159) 0:00:14.759 ******
2025-09-19 06:59:06.727379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:06.727399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:06.727410 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727500 | orchestrator |
2025-09-19 06:59:06.727515 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-19 06:59:06.727526 | orchestrator | Friday 19 September 2025 06:59:05 +0000 (0:00:00.183) 0:00:14.943 ******
2025-09-19 06:59:06.727537 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727547 | orchestrator |
2025-09-19 06:59:06.727558 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-19 06:59:06.727569 | orchestrator | Friday 19 September 2025 06:59:05 +0000 (0:00:00.170) 0:00:15.113 ******
2025-09-19 06:59:06.727580 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:06.727591 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:06.727602 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727612 | orchestrator |
2025-09-19 06:59:06.727623 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-19 06:59:06.727634 | orchestrator | Friday 19 September 2025 06:59:05 +0000 (0:00:00.182) 0:00:15.295 ******
2025-09-19 06:59:06.727644 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:06.727655 | orchestrator |
2025-09-19 06:59:06.727666 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-19 06:59:06.727677 | orchestrator | Friday 19 September 2025 06:59:05 +0000 (0:00:00.169) 0:00:15.465 ******
2025-09-19 06:59:06.727688 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:06.727699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:06.727710 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727720 | orchestrator |
2025-09-19 06:59:06.727731 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-19 06:59:06.727750 | orchestrator | Friday 19 September 2025 06:59:05 +0000 (0:00:00.186) 0:00:15.651 ******
2025-09-19 06:59:06.727762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
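The "Count OSDs put on ceph_db_devices/ceph_wal_devices/ceph_db_wal_devices" tasks above tally how many OSDs the `lvm_volumes` list wants to place on each shared DB/WAL VG, and the subsequent "Fail if number of OSDs exceeds num_osds" tasks enforce the limit. A minimal Python sketch of that counting logic, assuming hypothetical `db_vg`/`wal_vg` keys and a `num_osds` limit (the names are illustrative, not taken from the playbook):

```python
# Hypothetical sketch of the per-VG OSD counting behind the
# "Count OSDs put on ..." / "Fail if number of OSDs exceeds num_osds"
# tasks; key names and limits are assumptions for illustration.
from collections import Counter

def count_osds_per_vg(lvm_volumes, key):
    # Count how many lvm_volumes entries reference each VG under `key`
    # (e.g. 'db_vg'); entries without that key are block-only OSDs.
    return Counter(v[key] for v in lvm_volumes if key in v)

def check_num_osds(lvm_volumes, key, num_osds):
    # Fail if any VG is asked to carry more OSDs than num_osds allows.
    for vg, wanted in count_osds_per_vg(lvm_volumes, key).items():
        if wanted > num_osds:
            raise ValueError(f"{vg}: {wanted} OSDs wanted, limit {num_osds}")

# The two OSDs in this run define only data/data_vg, so every DB/WAL
# count stays empty -- matching the "{}" printed for
# _num_osds_wanted_per_db_vg and friends further down.
lvm_volumes = [
    {"data": "osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2",
     "data_vg": "ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2"},
    {"data": "osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1",
     "data_vg": "ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1"},
]
print(dict(count_osds_per_vg(lvm_volumes, "db_vg")))  # {}
```

With only block devices defined, all three checks skip, which is exactly what the log shows.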
2025-09-19 06:59:06.727772 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:06.727783 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727794 | orchestrator |
2025-09-19 06:59:06.727804 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-19 06:59:06.727815 | orchestrator | Friday 19 September 2025 06:59:06 +0000 (0:00:00.196) 0:00:15.848 ******
2025-09-19 06:59:06.727826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:06.727837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:06.727848 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727858 | orchestrator |
2025-09-19 06:59:06.727869 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-19 06:59:06.727880 | orchestrator | Friday 19 September 2025 06:59:06 +0000 (0:00:00.175) 0:00:16.024 ******
2025-09-19 06:59:06.727890 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727908 | orchestrator |
2025-09-19 06:59:06.727919 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-19 06:59:06.727930 | orchestrator | Friday 19 September 2025 06:59:06 +0000 (0:00:00.164) 0:00:16.189 ******
2025-09-19 06:59:06.727941 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:06.727952 | orchestrator |
2025-09-19 06:59:06.727969 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-19 06:59:13.542790 | orchestrator | Friday 19 September 2025 06:59:06 +0000 (0:00:00.187) 0:00:16.376 ******
2025-09-19 06:59:13.542899 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.542916 | orchestrator |
2025-09-19 06:59:13.542929 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-19 06:59:13.542941 | orchestrator | Friday 19 September 2025 06:59:06 +0000 (0:00:00.160) 0:00:16.536 ******
2025-09-19 06:59:13.542952 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:59:13.542964 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-19 06:59:13.542975 | orchestrator | }
2025-09-19 06:59:13.542986 | orchestrator |
2025-09-19 06:59:13.542998 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-19 06:59:13.543009 | orchestrator | Friday 19 September 2025 06:59:07 +0000 (0:00:00.372) 0:00:16.909 ******
2025-09-19 06:59:13.543020 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:59:13.543031 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-19 06:59:13.543042 | orchestrator | }
2025-09-19 06:59:13.543053 | orchestrator |
2025-09-19 06:59:13.543065 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-19 06:59:13.543076 | orchestrator | Friday 19 September 2025 06:59:07 +0000 (0:00:00.181) 0:00:17.090 ******
2025-09-19 06:59:13.543087 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:59:13.543098 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-19 06:59:13.543110 | orchestrator | }
2025-09-19 06:59:13.543122 | orchestrator |
2025-09-19 06:59:13.543133 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-19 06:59:13.543144 | orchestrator | Friday 19 September 2025 06:59:07 +0000 (0:00:00.166) 0:00:17.256 ******
2025-09-19 06:59:13.543155 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:13.543166 | orchestrator |
2025-09-19 06:59:13.543185 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-19 06:59:13.543211 | orchestrator | Friday 19 September 2025 06:59:08 +0000 (0:00:00.660) 0:00:17.917 ******
2025-09-19 06:59:13.543234 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:13.543253 | orchestrator |
2025-09-19 06:59:13.543272 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-19 06:59:13.543292 | orchestrator | Friday 19 September 2025 06:59:08 +0000 (0:00:00.522) 0:00:18.440 ******
2025-09-19 06:59:13.543310 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:13.543327 | orchestrator |
2025-09-19 06:59:13.543341 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-19 06:59:13.543353 | orchestrator | Friday 19 September 2025 06:59:09 +0000 (0:00:00.170) 0:00:18.957 ******
2025-09-19 06:59:13.543366 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:13.543378 | orchestrator |
2025-09-19 06:59:13.543391 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-19 06:59:13.543403 | orchestrator | Friday 19 September 2025 06:59:09 +0000 (0:00:00.170) 0:00:19.128 ******
2025-09-19 06:59:13.543415 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543427 | orchestrator |
2025-09-19 06:59:13.543470 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-19 06:59:13.543483 | orchestrator | Friday 19 September 2025 06:59:09 +0000 (0:00:00.116) 0:00:19.244 ******
2025-09-19 06:59:13.543495 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543507 | orchestrator |
2025-09-19 06:59:13.543520 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-19 06:59:13.543532 | orchestrator | Friday 19 September 2025 06:59:09 +0000 (0:00:00.148) 0:00:19.393 ******
2025-09-19 06:59:13.543571 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:59:13.543585 | orchestrator |  "vgs_report": {
2025-09-19 06:59:13.543611 | orchestrator |  "vg": []
2025-09-19 06:59:13.543625 | orchestrator |  }
2025-09-19 06:59:13.543637 | orchestrator | }
2025-09-19 06:59:13.543649 | orchestrator |
2025-09-19 06:59:13.543662 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-19 06:59:13.543674 | orchestrator | Friday 19 September 2025 06:59:09 +0000 (0:00:00.154) 0:00:19.547 ******
2025-09-19 06:59:13.543685 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543695 | orchestrator |
2025-09-19 06:59:13.543706 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-19 06:59:13.543716 | orchestrator | Friday 19 September 2025 06:59:10 +0000 (0:00:00.146) 0:00:19.693 ******
2025-09-19 06:59:13.543727 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543738 | orchestrator |
2025-09-19 06:59:13.543748 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-19 06:59:13.543759 | orchestrator | Friday 19 September 2025 06:59:10 +0000 (0:00:00.126) 0:00:19.820 ******
2025-09-19 06:59:13.543769 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543780 | orchestrator |
2025-09-19 06:59:13.543790 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-19 06:59:13.543801 | orchestrator | Friday 19 September 2025 06:59:10 +0000 (0:00:00.351) 0:00:20.172 ******
2025-09-19 06:59:13.543812 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543822 | orchestrator |
2025-09-19 06:59:13.543833 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-19 06:59:13.543844 | orchestrator | Friday 19 September 2025 06:59:10 +0000 (0:00:00.149) 0:00:20.322 ******
2025-09-19 06:59:13.543854 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543865 | orchestrator |
2025-09-19 06:59:13.543875 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-19 06:59:13.543886 | orchestrator | Friday 19 September 2025 06:59:10 +0000 (0:00:00.166) 0:00:20.488 ******
2025-09-19 06:59:13.543897 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543907 | orchestrator |
2025-09-19 06:59:13.543918 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-19 06:59:13.543929 | orchestrator | Friday 19 September 2025 06:59:10 +0000 (0:00:00.149) 0:00:20.637 ******
2025-09-19 06:59:13.543939 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543950 | orchestrator |
2025-09-19 06:59:13.543960 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-19 06:59:13.543971 | orchestrator | Friday 19 September 2025 06:59:11 +0000 (0:00:00.173) 0:00:20.811 ******
2025-09-19 06:59:13.543982 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.543992 | orchestrator |
2025-09-19 06:59:13.544003 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-19 06:59:13.544032 | orchestrator | Friday 19 September 2025 06:59:11 +0000 (0:00:00.179) 0:00:20.990 ******
2025-09-19 06:59:13.544044 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544055 | orchestrator |
2025-09-19 06:59:13.544066 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-19 06:59:13.544076 | orchestrator | Friday 19 September 2025 06:59:11 +0000 (0:00:00.200) 0:00:21.191 ******
2025-09-19 06:59:13.544087 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544098 | orchestrator |
2025-09-19 06:59:13.544108 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-19 06:59:13.544119 | orchestrator | Friday 19 September 2025 06:59:11 +0000 (0:00:00.164) 0:00:21.355 ******
2025-09-19 06:59:13.544130 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544140 | orchestrator |
2025-09-19 06:59:13.544151 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-19 06:59:13.544162 | orchestrator | Friday 19 September 2025 06:59:11 +0000 (0:00:00.131) 0:00:21.487 ******
2025-09-19 06:59:13.544177 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544196 | orchestrator |
2025-09-19 06:59:13.544228 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-19 06:59:13.544247 | orchestrator | Friday 19 September 2025 06:59:11 +0000 (0:00:00.128) 0:00:21.616 ******
2025-09-19 06:59:13.544264 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544275 | orchestrator |
2025-09-19 06:59:13.544286 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 06:59:13.544297 | orchestrator | Friday 19 September 2025 06:59:12 +0000 (0:00:00.131) 0:00:21.748 ******
2025-09-19 06:59:13.544307 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544318 | orchestrator |
2025-09-19 06:59:13.544329 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 06:59:13.544339 | orchestrator | Friday 19 September 2025 06:59:12 +0000 (0:00:00.166) 0:00:21.914 ******
2025-09-19 06:59:13.544351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:13.544364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:13.544375 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544386 | orchestrator |
2025-09-19 06:59:13.544396 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 06:59:13.544407 | orchestrator | Friday 19 September 2025 06:59:12 +0000 (0:00:00.380) 0:00:22.294 ******
2025-09-19 06:59:13.544417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:13.544428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:13.544478 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544489 | orchestrator |
2025-09-19 06:59:13.544500 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 06:59:13.544511 | orchestrator | Friday 19 September 2025 06:59:12 +0000 (0:00:00.159) 0:00:22.454 ******
2025-09-19 06:59:13.544522 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:13.544533 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:13.544544 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544555 | orchestrator |
2025-09-19 06:59:13.544565 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 06:59:13.544576 | orchestrator | Friday 19 September 2025 06:59:13 +0000 (0:00:00.216) 0:00:22.671 ******
2025-09-19 06:59:13.544587 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:13.544598 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:13.544608 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544619 | orchestrator |
2025-09-19 06:59:13.544630 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 06:59:13.544640 | orchestrator | Friday 19 September 2025 06:59:13 +0000 (0:00:00.172) 0:00:22.860 ******
2025-09-19 06:59:13.544651 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:13.544662 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:13.544673 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:13.544698 | orchestrator |
2025-09-19 06:59:13.544717 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 06:59:13.544729 | orchestrator | Friday 19 September 2025 06:59:13 +0000 (0:00:00.160) 0:00:23.033 ******
2025-09-19 06:59:13.544739 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:13.544759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:19.160656 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:19.160764 | orchestrator |
2025-09-19 06:59:19.160781 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 06:59:19.160794 | orchestrator | Friday 19 September 2025 06:59:13 +0000 (0:00:00.155) 0:00:23.193 ******
2025-09-19 06:59:19.160826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:19.160840 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:19.160851 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:19.160862 | orchestrator |
2025-09-19 06:59:19.160873 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 06:59:19.160884 | orchestrator | Friday 19 September 2025 06:59:13 +0000 (0:00:00.155) 0:00:23.349 ******
2025-09-19 06:59:19.160895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:19.160910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:19.160928 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:19.160946 | orchestrator |
2025-09-19 06:59:19.160963 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-19 06:59:19.160981 | orchestrator | Friday 19 September 2025 06:59:13 +0000 (0:00:00.160) 0:00:23.510 ******
2025-09-19 06:59:19.160999 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:19.161017 | orchestrator |
2025-09-19 06:59:19.161037 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-19 06:59:19.161055 | orchestrator | Friday 19 September 2025 06:59:14 +0000 (0:00:00.560) 0:00:24.070 ******
2025-09-19 06:59:19.161075 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:19.161090 | orchestrator |
2025-09-19 06:59:19.161101 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-19 06:59:19.161111 | orchestrator | Friday 19 September 2025 06:59:14 +0000 (0:00:00.542) 0:00:24.613 ******
2025-09-19 06:59:19.161122 | orchestrator | ok: [testbed-node-3]
2025-09-19 06:59:19.161133 | orchestrator |
2025-09-19 06:59:19.161144 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-19 06:59:19.161154 | orchestrator | Friday 19 September 2025 06:59:15 +0000 (0:00:00.146) 0:00:24.760 ******
2025-09-19 06:59:19.161165 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'vg_name': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'})
2025-09-19 06:59:19.161177 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'vg_name': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'})
2025-09-19 06:59:19.161188 | orchestrator |
2025-09-19 06:59:19.161205 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-19 06:59:19.161216 | orchestrator | Friday 19 September 2025 06:59:15 +0000 (0:00:00.161) 0:00:24.922 ******
2025-09-19 06:59:19.161227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:19.161266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:19.161278 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:19.161288 | orchestrator |
2025-09-19 06:59:19.161299 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-19 06:59:19.161310 | orchestrator | Friday 19 September 2025 06:59:15 +0000 (0:00:00.370) 0:00:25.292 ******
2025-09-19 06:59:19.161321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:19.161331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:19.161342 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:19.161353 | orchestrator |
2025-09-19 06:59:19.161363 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-19 06:59:19.161374 | orchestrator | Friday 19 September 2025 06:59:15 +0000 (0:00:00.164) 0:00:25.457 ******
2025-09-19 06:59:19.161387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 
2025-09-19 06:59:19.161407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 
2025-09-19 06:59:19.161427 | orchestrator | skipping: [testbed-node-3]
2025-09-19 06:59:19.161478 | orchestrator |
2025-09-19 06:59:19.161490 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 06:59:19.161500 | orchestrator | Friday 19 September 2025 06:59:15 +0000 (0:00:00.164) 0:00:25.622 ******
2025-09-19 06:59:19.161511 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 06:59:19.161522 | orchestrator |  "lvm_report": {
2025-09-19 06:59:19.161534 | orchestrator |  "lv": [
2025-09-19 06:59:19.161545 | orchestrator |  {
2025-09-19 06:59:19.161575 | orchestrator |  "lv_name": "osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1",
2025-09-19 06:59:19.161587 | orchestrator |  "vg_name": "ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1"
2025-09-19 06:59:19.161598 | orchestrator |  },
2025-09-19 06:59:19.161609 | orchestrator |  {
2025-09-19 06:59:19.161620 | orchestrator |  "lv_name": "osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2",
2025-09-19 06:59:19.161631 | orchestrator |  "vg_name": "ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2"
2025-09-19 06:59:19.161642 | orchestrator |  }
2025-09-19 06:59:19.161652 | orchestrator |  ],
2025-09-19 06:59:19.161663 | orchestrator |  "pv": [
2025-09-19 06:59:19.161674 | orchestrator |  {
2025-09-19 06:59:19.161685 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-19 06:59:19.161696 | orchestrator |  "vg_name": "ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2"
2025-09-19 06:59:19.161706 | orchestrator |  },
2025-09-19 06:59:19.161717 | orchestrator |  {
2025-09-19 06:59:19.161728 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-19 06:59:19.161738 | orchestrator |  "vg_name": "ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1"
2025-09-19 06:59:19.161749 | orchestrator |  }
2025-09-19 06:59:19.161760 | orchestrator |  ]
2025-09-19 06:59:19.161771 | orchestrator |  }
2025-09-19 06:59:19.161782 | orchestrator | }
2025-09-19 06:59:19.161793 | orchestrator |
2025-09-19 06:59:19.161804 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-19 06:59:19.161815 | orchestrator |
2025-09-19 06:59:19.161825 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 06:59:19.161836 | orchestrator | Friday 19 September 2025 06:59:16 +0000 (0:00:00.302) 0:00:25.925 ******
2025-09-19 06:59:19.161847 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 06:59:19.161868 | orchestrator |
2025-09-19 06:59:19.161879 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 06:59:19.161890 | orchestrator | Friday 19 September 2025 06:59:16 +0000 (0:00:00.274) 0:00:26.199 ******
2025-09-19 06:59:19.161900 | orchestrator | ok: [testbed-node-4]
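The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Create list of VG/LV names" tasks above merge the JSON reports of `lvs` and `pvs` into the `lvm_report` structure printed in the log. A minimal Python sketch of that combination, assuming the standard `lvs`/`pvs --reportformat json` shape (`report[0].lv` / `report[0].pv`); the merge logic itself is an assumption, only the resulting structure is taken from the log:

```python
import json

# Sample command output shaped like `lvs --reportformat json` and
# `pvs --reportformat json`, populated with the names seen in the log.
lvs_cmd_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1",
     "vg_name": "ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1"},
    {"lv_name": "osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2",
     "vg_name": "ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2"},
]}]})
pvs_cmd_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1"},
]}]})

# Combine both reports into the lvm_report structure printed above.
lvm_report = {
    "lv": json.loads(lvs_cmd_output)["report"][0]["lv"],
    "pv": json.loads(pvs_cmd_output)["report"][0]["pv"],
}

# "VG/LV" name list, against which the "Fail if ... LV defined in
# lvm_volumes is missing" checks can be made.
vg_lv_names = [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]
print(json.dumps(lvm_report, indent=2))
```

With every `data_vg`/`data` pair from `lvm_volumes` present in that list, the "missing LV" checks pass (here they skip, since the conditions do not apply).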
2025-09-19 06:59:19.161911 | orchestrator |
2025-09-19 06:59:19.161922 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:19.161933 | orchestrator | Friday 19 September 2025 06:59:16 +0000 (0:00:00.245) 0:00:26.445 ******
2025-09-19 06:59:19.161944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-19 06:59:19.161954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-19 06:59:19.161965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-19 06:59:19.161976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-19 06:59:19.161986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-19 06:59:19.161997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-19 06:59:19.162008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-19 06:59:19.162074 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-19 06:59:19.162088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-19 06:59:19.162098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-19 06:59:19.162109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-19 06:59:19.162120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-19 06:59:19.162131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-19 06:59:19.162142 | orchestrator |
2025-09-19 06:59:19.162186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:19.162197 | orchestrator | Friday 19 September 2025 06:59:17 +0000 (0:00:00.428) 0:00:26.873 ******
2025-09-19 06:59:19.162208 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:19.162219 | orchestrator |
2025-09-19 06:59:19.162230 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:19.162241 | orchestrator | Friday 19 September 2025 06:59:17 +0000 (0:00:00.208) 0:00:27.081 ******
2025-09-19 06:59:19.162252 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:19.162262 | orchestrator |
2025-09-19 06:59:19.162273 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:19.162284 | orchestrator | Friday 19 September 2025 06:59:17 +0000 (0:00:00.199) 0:00:27.280 ******
2025-09-19 06:59:19.162294 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:19.162305 | orchestrator |
2025-09-19 06:59:19.162316 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:19.162326 | orchestrator | Friday 19 September 2025 06:59:18 +0000 (0:00:00.656) 0:00:27.936 ******
2025-09-19 06:59:19.162337 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:19.162348 | orchestrator |
2025-09-19 06:59:19.162358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:19.162369 | orchestrator | Friday 19 September 2025 06:59:18 +0000 (0:00:00.207) 0:00:28.143 ******
2025-09-19 06:59:19.162380 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:19.162391 | orchestrator |
2025-09-19 06:59:19.162401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:19.162412 | orchestrator | Friday 19 September 2025 06:59:18 +0000 (0:00:00.226) 0:00:28.370 ******
2025-09-19 06:59:19.162422 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:19.162475 | orchestrator |
2025-09-19 06:59:19.162496 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:19.162508 | orchestrator | Friday 19 September 2025 06:59:18 +0000 (0:00:00.218) 0:00:28.589 ******
2025-09-19 06:59:19.162519 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:19.162530 | orchestrator |
2025-09-19 06:59:19.162550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:29.965392 | orchestrator | Friday 19 September 2025 06:59:19 +0000 (0:00:00.217) 0:00:28.806 ******
2025-09-19 06:59:29.965531 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:29.965546 | orchestrator |
2025-09-19 06:59:29.965557 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:29.965568 | orchestrator | Friday 19 September 2025 06:59:19 +0000 (0:00:00.203) 0:00:29.009 ******
2025-09-19 06:59:29.965578 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387)
2025-09-19 06:59:29.965589 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387)
2025-09-19 06:59:29.965598 | orchestrator |
2025-09-19 06:59:29.965608 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:29.965618 | orchestrator | Friday 19 September 2025 06:59:19 +0000 (0:00:00.423) 0:00:29.433 ******
2025-09-19 06:59:29.965627 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7)
2025-09-19 06:59:29.965637 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7)
2025-09-19 06:59:29.965647 | orchestrator |
2025-09-19 06:59:29.965656 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:29.965666 | orchestrator | Friday 19 September 2025 06:59:20 +0000 (0:00:00.419) 0:00:29.853 ******
2025-09-19 06:59:29.965676 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee)
2025-09-19 06:59:29.965685 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee)
2025-09-19 06:59:29.965695 | orchestrator |
2025-09-19 06:59:29.965704 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:29.965714 | orchestrator | Friday 19 September 2025 06:59:20 +0000 (0:00:00.456) 0:00:30.309 ******
2025-09-19 06:59:29.965723 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0)
2025-09-19 06:59:29.965733 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0)
2025-09-19 06:59:29.965743 | orchestrator |
2025-09-19 06:59:29.965752 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 06:59:29.965762 | orchestrator | Friday 19 September 2025 06:59:21 +0000 (0:00:00.479) 0:00:30.788 ******
2025-09-19 06:59:29.965771 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 06:59:29.965781 | orchestrator |
2025-09-19 06:59:29.965790 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:59:29.965800 | orchestrator | Friday 19 September 2025 06:59:21 +0000 (0:00:00.367) 0:00:31.156 ******
2025-09-19 06:59:29.965809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-19 06:59:29.965820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-19 06:59:29.965829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-19 06:59:29.965839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-19 06:59:29.965848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-19 06:59:29.965858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-19 06:59:29.965867 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-19 06:59:29.965898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-19 06:59:29.965908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-19 06:59:29.965919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-19 06:59:29.965930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-19 06:59:29.965941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-19 06:59:29.965951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-19 06:59:29.965962 | orchestrator |
2025-09-19 06:59:29.965987 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:59:29.965998 | orchestrator | Friday 19 September 2025 06:59:22 +0000 (0:00:00.654) 0:00:31.811 ******
2025-09-19 06:59:29.966009 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:29.966068 | orchestrator |
2025-09-19 06:59:29.966080 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:59:29.966091 | orchestrator | Friday 19 September 2025 06:59:22 +0000 (0:00:00.210) 0:00:32.022 ******
2025-09-19 06:59:29.966101 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:29.966112 | orchestrator |
2025-09-19 06:59:29.966123 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:59:29.966134 | orchestrator | Friday 19 September 2025 06:59:22 +0000 (0:00:00.207) 0:00:32.229 ******
2025-09-19 06:59:29.966145 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:29.966155 | orchestrator |
2025-09-19 06:59:29.966166 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:59:29.966177 | orchestrator | Friday 19 September 2025 06:59:22 +0000 (0:00:00.227) 0:00:32.456 ******
2025-09-19 06:59:29.966188 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:29.966199 | orchestrator |
2025-09-19 06:59:29.966225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:59:29.966237 | orchestrator | Friday 19 September 2025 06:59:23 +0000 (0:00:00.217) 0:00:32.673 ******
2025-09-19 06:59:29.966248 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:29.966259 | orchestrator |
2025-09-19 06:59:29.966271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:59:29.966282 | orchestrator | Friday 19 September 2025 06:59:23 +0000 (0:00:00.222) 0:00:32.895 ******
2025-09-19 06:59:29.966292 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:29.966301 | orchestrator |
2025-09-19 06:59:29.966311 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 06:59:29.966320 | orchestrator | Friday 19 September 2025 06:59:23 +0000 (0:00:00.202) 0:00:33.098 ******
2025-09-19 06:59:29.966330 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:29.966339 | orchestrator |
2025-09-19 06:59:29.966349 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-19 06:59:29.966358 | orchestrator | Friday 19 September 2025 06:59:23 +0000 (0:00:00.198) 0:00:33.297 ****** 2025-09-19 06:59:29.966367 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:29.966377 | orchestrator | 2025-09-19 06:59:29.966386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:29.966396 | orchestrator | Friday 19 September 2025 06:59:23 +0000 (0:00:00.227) 0:00:33.524 ****** 2025-09-19 06:59:29.966405 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-19 06:59:29.966415 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-19 06:59:29.966424 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-19 06:59:29.966453 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-19 06:59:29.966463 | orchestrator | 2025-09-19 06:59:29.966473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:29.966483 | orchestrator | Friday 19 September 2025 06:59:24 +0000 (0:00:00.985) 0:00:34.510 ****** 2025-09-19 06:59:29.966501 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:29.966511 | orchestrator | 2025-09-19 06:59:29.966520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:29.966530 | orchestrator | Friday 19 September 2025 06:59:25 +0000 (0:00:00.212) 0:00:34.722 ****** 2025-09-19 06:59:29.966539 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:29.966549 | orchestrator | 2025-09-19 06:59:29.966558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:29.966568 | orchestrator | Friday 19 September 2025 06:59:25 +0000 (0:00:00.195) 0:00:34.917 ****** 2025-09-19 06:59:29.966578 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:29.966587 | orchestrator | 2025-09-19 
06:59:29.966596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:29.966606 | orchestrator | Friday 19 September 2025 06:59:26 +0000 (0:00:00.771) 0:00:35.689 ****** 2025-09-19 06:59:29.966616 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:29.966625 | orchestrator | 2025-09-19 06:59:29.966635 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 06:59:29.966644 | orchestrator | Friday 19 September 2025 06:59:26 +0000 (0:00:00.210) 0:00:35.900 ****** 2025-09-19 06:59:29.966659 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:29.966669 | orchestrator | 2025-09-19 06:59:29.966678 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-19 06:59:29.966688 | orchestrator | Friday 19 September 2025 06:59:26 +0000 (0:00:00.154) 0:00:36.054 ****** 2025-09-19 06:59:29.966698 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '05a06e17-0162-5722-bf4c-f18a4cab61c7'}}) 2025-09-19 06:59:29.966708 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'caff573e-485a-5d29-90dc-90eefd21fd68'}}) 2025-09-19 06:59:29.966717 | orchestrator | 2025-09-19 06:59:29.966727 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 06:59:29.966736 | orchestrator | Friday 19 September 2025 06:59:26 +0000 (0:00:00.207) 0:00:36.261 ****** 2025-09-19 06:59:29.966747 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'}) 2025-09-19 06:59:29.966757 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'}) 2025-09-19 06:59:29.966767 | orchestrator | 2025-09-19 
06:59:29.966777 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 06:59:29.966786 | orchestrator | Friday 19 September 2025 06:59:28 +0000 (0:00:01.866) 0:00:38.128 ****** 2025-09-19 06:59:29.966796 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})  2025-09-19 06:59:29.966806 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})  2025-09-19 06:59:29.966816 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:29.966826 | orchestrator | 2025-09-19 06:59:29.966835 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-19 06:59:29.966844 | orchestrator | Friday 19 September 2025 06:59:28 +0000 (0:00:00.150) 0:00:38.279 ****** 2025-09-19 06:59:29.966854 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'}) 2025-09-19 06:59:29.966864 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'}) 2025-09-19 06:59:29.966873 | orchestrator | 2025-09-19 06:59:29.966889 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 06:59:35.947359 | orchestrator | Friday 19 September 2025 06:59:29 +0000 (0:00:01.331) 0:00:39.610 ****** 2025-09-19 06:59:35.947534 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})  2025-09-19 06:59:35.947554 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 
'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})  2025-09-19 06:59:35.947564 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.947576 | orchestrator | 2025-09-19 06:59:35.947586 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 06:59:35.947596 | orchestrator | Friday 19 September 2025 06:59:30 +0000 (0:00:00.167) 0:00:39.777 ****** 2025-09-19 06:59:35.947606 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.947616 | orchestrator | 2025-09-19 06:59:35.947625 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 06:59:35.947635 | orchestrator | Friday 19 September 2025 06:59:30 +0000 (0:00:00.154) 0:00:39.932 ****** 2025-09-19 06:59:35.947645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})  2025-09-19 06:59:35.947654 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})  2025-09-19 06:59:35.947663 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.947673 | orchestrator | 2025-09-19 06:59:35.947682 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-19 06:59:35.947692 | orchestrator | Friday 19 September 2025 06:59:30 +0000 (0:00:00.158) 0:00:40.090 ****** 2025-09-19 06:59:35.947701 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.947711 | orchestrator | 2025-09-19 06:59:35.947720 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-19 06:59:35.947729 | orchestrator | Friday 19 September 2025 06:59:30 +0000 (0:00:00.155) 0:00:40.246 ****** 2025-09-19 06:59:35.947739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})  2025-09-19 06:59:35.947748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})  2025-09-19 06:59:35.947758 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.947767 | orchestrator | 2025-09-19 06:59:35.947777 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-19 06:59:35.947786 | orchestrator | Friday 19 September 2025 06:59:30 +0000 (0:00:00.161) 0:00:40.407 ****** 2025-09-19 06:59:35.947810 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.947820 | orchestrator | 2025-09-19 06:59:35.947829 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-19 06:59:35.947838 | orchestrator | Friday 19 September 2025 06:59:31 +0000 (0:00:00.451) 0:00:40.859 ****** 2025-09-19 06:59:35.947848 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})  2025-09-19 06:59:35.947857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})  2025-09-19 06:59:35.947867 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.947876 | orchestrator | 2025-09-19 06:59:35.947886 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-19 06:59:35.947896 | orchestrator | Friday 19 September 2025 06:59:31 +0000 (0:00:00.143) 0:00:41.003 ****** 2025-09-19 06:59:35.947907 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:59:35.947918 | orchestrator | 2025-09-19 06:59:35.947930 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] 
**************** 2025-09-19 06:59:35.947941 | orchestrator | Friday 19 September 2025 06:59:31 +0000 (0:00:00.143) 0:00:41.146 ****** 2025-09-19 06:59:35.947961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})  2025-09-19 06:59:35.947972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})  2025-09-19 06:59:35.947984 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.947996 | orchestrator | 2025-09-19 06:59:35.948007 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-19 06:59:35.948018 | orchestrator | Friday 19 September 2025 06:59:31 +0000 (0:00:00.159) 0:00:41.305 ****** 2025-09-19 06:59:35.948029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})  2025-09-19 06:59:35.948040 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})  2025-09-19 06:59:35.948050 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948061 | orchestrator | 2025-09-19 06:59:35.948072 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 06:59:35.948083 | orchestrator | Friday 19 September 2025 06:59:31 +0000 (0:00:00.166) 0:00:41.472 ****** 2025-09-19 06:59:35.948109 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})  2025-09-19 06:59:35.948119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 
'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})  2025-09-19 06:59:35.948129 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948138 | orchestrator | 2025-09-19 06:59:35.948148 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 06:59:35.948157 | orchestrator | Friday 19 September 2025 06:59:31 +0000 (0:00:00.158) 0:00:41.630 ****** 2025-09-19 06:59:35.948166 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948176 | orchestrator | 2025-09-19 06:59:35.948185 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 06:59:35.948194 | orchestrator | Friday 19 September 2025 06:59:32 +0000 (0:00:00.142) 0:00:41.773 ****** 2025-09-19 06:59:35.948204 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948213 | orchestrator | 2025-09-19 06:59:35.948222 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-19 06:59:35.948232 | orchestrator | Friday 19 September 2025 06:59:32 +0000 (0:00:00.145) 0:00:41.919 ****** 2025-09-19 06:59:35.948241 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948250 | orchestrator | 2025-09-19 06:59:35.948260 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-19 06:59:35.948269 | orchestrator | Friday 19 September 2025 06:59:32 +0000 (0:00:00.145) 0:00:42.065 ****** 2025-09-19 06:59:35.948278 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 06:59:35.948288 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-19 06:59:35.948298 | orchestrator | } 2025-09-19 06:59:35.948307 | orchestrator | 2025-09-19 06:59:35.948317 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-19 06:59:35.948326 | orchestrator | Friday 19 September 2025 06:59:32 +0000 (0:00:00.140) 0:00:42.205 ****** 2025-09-19 06:59:35.948335 | orchestrator | 
ok: [testbed-node-4] => { 2025-09-19 06:59:35.948345 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-19 06:59:35.948354 | orchestrator | } 2025-09-19 06:59:35.948363 | orchestrator | 2025-09-19 06:59:35.948373 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-19 06:59:35.948382 | orchestrator | Friday 19 September 2025 06:59:32 +0000 (0:00:00.150) 0:00:42.356 ****** 2025-09-19 06:59:35.948391 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 06:59:35.948401 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-19 06:59:35.948416 | orchestrator | } 2025-09-19 06:59:35.948476 | orchestrator | 2025-09-19 06:59:35.948488 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-19 06:59:35.948498 | orchestrator | Friday 19 September 2025 06:59:32 +0000 (0:00:00.147) 0:00:42.504 ****** 2025-09-19 06:59:35.948507 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:59:35.948517 | orchestrator | 2025-09-19 06:59:35.948526 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-19 06:59:35.948536 | orchestrator | Friday 19 September 2025 06:59:33 +0000 (0:00:00.828) 0:00:43.332 ****** 2025-09-19 06:59:35.948545 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:59:35.948555 | orchestrator | 2025-09-19 06:59:35.948564 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-19 06:59:35.948574 | orchestrator | Friday 19 September 2025 06:59:34 +0000 (0:00:00.624) 0:00:43.956 ****** 2025-09-19 06:59:35.948584 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:59:35.948593 | orchestrator | 2025-09-19 06:59:35.948603 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-19 06:59:35.948612 | orchestrator | Friday 19 September 2025 06:59:34 +0000 (0:00:00.521) 0:00:44.477 ****** 2025-09-19 
06:59:35.948622 | orchestrator | ok: [testbed-node-4] 2025-09-19 06:59:35.948631 | orchestrator | 2025-09-19 06:59:35.948641 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-19 06:59:35.948650 | orchestrator | Friday 19 September 2025 06:59:34 +0000 (0:00:00.154) 0:00:44.632 ****** 2025-09-19 06:59:35.948660 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948669 | orchestrator | 2025-09-19 06:59:35.948679 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-19 06:59:35.948688 | orchestrator | Friday 19 September 2025 06:59:35 +0000 (0:00:00.115) 0:00:44.748 ****** 2025-09-19 06:59:35.948698 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948707 | orchestrator | 2025-09-19 06:59:35.948717 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-19 06:59:35.948726 | orchestrator | Friday 19 September 2025 06:59:35 +0000 (0:00:00.130) 0:00:44.878 ****** 2025-09-19 06:59:35.948736 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 06:59:35.948745 | orchestrator |  "vgs_report": { 2025-09-19 06:59:35.948756 | orchestrator |  "vg": [] 2025-09-19 06:59:35.948765 | orchestrator |  } 2025-09-19 06:59:35.948775 | orchestrator | } 2025-09-19 06:59:35.948785 | orchestrator | 2025-09-19 06:59:35.948794 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-19 06:59:35.948804 | orchestrator | Friday 19 September 2025 06:59:35 +0000 (0:00:00.138) 0:00:45.017 ****** 2025-09-19 06:59:35.948813 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948823 | orchestrator | 2025-09-19 06:59:35.948832 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-19 06:59:35.948842 | orchestrator | Friday 19 September 2025 06:59:35 +0000 (0:00:00.147) 0:00:45.165 ****** 2025-09-19 
06:59:35.948851 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948861 | orchestrator | 2025-09-19 06:59:35.948878 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-19 06:59:35.948888 | orchestrator | Friday 19 September 2025 06:59:35 +0000 (0:00:00.145) 0:00:45.310 ****** 2025-09-19 06:59:35.948897 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948907 | orchestrator | 2025-09-19 06:59:35.948916 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-19 06:59:35.948926 | orchestrator | Friday 19 September 2025 06:59:35 +0000 (0:00:00.142) 0:00:45.453 ****** 2025-09-19 06:59:35.948935 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:35.948944 | orchestrator | 2025-09-19 06:59:35.948954 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-19 06:59:35.948970 | orchestrator | Friday 19 September 2025 06:59:35 +0000 (0:00:00.143) 0:00:45.597 ****** 2025-09-19 06:59:40.682573 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:40.682671 | orchestrator | 2025-09-19 06:59:40.682710 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-19 06:59:40.682724 | orchestrator | Friday 19 September 2025 06:59:36 +0000 (0:00:00.135) 0:00:45.732 ****** 2025-09-19 06:59:40.682735 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:40.682746 | orchestrator | 2025-09-19 06:59:40.682757 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-19 06:59:40.682768 | orchestrator | Friday 19 September 2025 06:59:36 +0000 (0:00:00.354) 0:00:46.086 ****** 2025-09-19 06:59:40.682779 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:40.682789 | orchestrator | 2025-09-19 06:59:40.682800 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-09-19 06:59:40.682811 | orchestrator | Friday 19 September 2025 06:59:36 +0000 (0:00:00.129) 0:00:46.216 ****** 2025-09-19 06:59:40.682822 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:40.682833 | orchestrator | 2025-09-19 06:59:40.682843 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-19 06:59:40.682854 | orchestrator | Friday 19 September 2025 06:59:36 +0000 (0:00:00.134) 0:00:46.350 ****** 2025-09-19 06:59:40.682865 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:40.682875 | orchestrator | 2025-09-19 06:59:40.682886 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-19 06:59:40.682897 | orchestrator | Friday 19 September 2025 06:59:36 +0000 (0:00:00.141) 0:00:46.492 ****** 2025-09-19 06:59:40.682907 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:40.682918 | orchestrator | 2025-09-19 06:59:40.682929 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-19 06:59:40.682939 | orchestrator | Friday 19 September 2025 06:59:36 +0000 (0:00:00.127) 0:00:46.620 ****** 2025-09-19 06:59:40.682950 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:40.682961 | orchestrator | 2025-09-19 06:59:40.682971 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-19 06:59:40.682982 | orchestrator | Friday 19 September 2025 06:59:37 +0000 (0:00:00.122) 0:00:46.742 ****** 2025-09-19 06:59:40.682993 | orchestrator | skipping: [testbed-node-4] 2025-09-19 06:59:40.683003 | orchestrator | 2025-09-19 06:59:40.683014 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-19 06:59:40.683043 | orchestrator | Friday 19 September 2025 06:59:37 +0000 (0:00:00.140) 0:00:46.883 ****** 2025-09-19 06:59:40.683054 | orchestrator | skipping: [testbed-node-4] 
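
The "Fail if ..." guard tasks in this run are all skipped because the all-in-one testbed node defines no separate ceph_db_devices / ceph_wal_devices / ceph_db_wal_devices. The sanity checks they would perform (oversubscription of a DB/WAL VG and the 30 GiB minimum DB LV size) can be sketched as follows; this helper is hypothetical and is not taken from the OSISM playbooks:

```python
GIB = 1024 ** 3  # 1 GiB in bytes

def check_db_vg(num_osds_wanted: int, vg_free_bytes: int,
                db_lv_size_bytes: int) -> None:
    """Sketch of the guard logic: refuse to proceed rather than create
    undersized or oversubscribed DB LVs (hypothetical helper)."""
    if db_lv_size_bytes < 30 * GIB:
        raise ValueError("DB LV size < 30 GiB")
    if num_osds_wanted * db_lv_size_bytes > vg_free_bytes:
        raise ValueError("size of DB LVs > available in VG")

# A 100 GiB VG can host two 40 GiB DB LVs, but not three,
# and a 20 GiB DB LV is rejected outright.
check_db_vg(num_osds_wanted=2, vg_free_bytes=100 * GIB,
            db_lv_size_bytes=40 * GIB)
```

In the actual playbook these conditions are expressed as `fail`/`assert` tasks driven by the `vgs` JSON report gathered a few tasks earlier; the sketch above only illustrates the arithmetic.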
2025-09-19 06:59:40.683077 | orchestrator |
2025-09-19 06:59:40.683090 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 06:59:40.683102 | orchestrator | Friday 19 September 2025  06:59:37 +0000 (0:00:00.136)       0:00:47.020 ******
2025-09-19 06:59:40.683114 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683127 | orchestrator |
2025-09-19 06:59:40.683140 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 06:59:40.683153 | orchestrator | Friday 19 September 2025  06:59:37 +0000 (0:00:00.165)       0:00:47.185 ******
2025-09-19 06:59:40.683181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683196 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683209 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683221 | orchestrator |
2025-09-19 06:59:40.683233 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 06:59:40.683245 | orchestrator | Friday 19 September 2025  06:59:37 +0000 (0:00:00.157)       0:00:47.343 ******
2025-09-19 06:59:40.683257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683291 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683302 | orchestrator |
2025-09-19 06:59:40.683312 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 06:59:40.683323 | orchestrator | Friday 19 September 2025  06:59:37 +0000 (0:00:00.143)       0:00:47.498 ******
2025-09-19 06:59:40.683334 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683355 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683366 | orchestrator |
2025-09-19 06:59:40.683377 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 06:59:40.683387 | orchestrator | Friday 19 September 2025  06:59:37 +0000 (0:00:00.143)       0:00:47.642 ******
2025-09-19 06:59:40.683398 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683409 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683419 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683450 | orchestrator |
2025-09-19 06:59:40.683461 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 06:59:40.683489 | orchestrator | Friday 19 September 2025  06:59:38 +0000 (0:00:00.360)       0:00:48.002 ******
2025-09-19 06:59:40.683501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683523 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683534 | orchestrator |
2025-09-19 06:59:40.683544 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 06:59:40.683555 | orchestrator | Friday 19 September 2025  06:59:38 +0000 (0:00:00.166)       0:00:48.168 ******
2025-09-19 06:59:40.683566 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683577 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683588 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683599 | orchestrator |
2025-09-19 06:59:40.683610 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 06:59:40.683621 | orchestrator | Friday 19 September 2025  06:59:38 +0000 (0:00:00.158)       0:00:48.326 ******
2025-09-19 06:59:40.683632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683653 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683664 | orchestrator |
2025-09-19 06:59:40.683675 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 06:59:40.683686 | orchestrator | Friday 19 September 2025  06:59:38 +0000 (0:00:00.169)       0:00:48.496 ******
2025-09-19 06:59:40.683696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683725 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683736 | orchestrator |
2025-09-19 06:59:40.683752 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-19 06:59:40.683763 | orchestrator | Friday 19 September 2025  06:59:38 +0000 (0:00:00.150)       0:00:48.646 ******
2025-09-19 06:59:40.683774 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:59:40.683785 | orchestrator |
2025-09-19 06:59:40.683796 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-19 06:59:40.683806 | orchestrator | Friday 19 September 2025  06:59:39 +0000 (0:00:00.522)       0:00:49.168 ******
2025-09-19 06:59:40.683817 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:59:40.683828 | orchestrator |
2025-09-19 06:59:40.683838 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-19 06:59:40.683849 | orchestrator | Friday 19 September 2025  06:59:40 +0000 (0:00:00.531)       0:00:49.700 ******
2025-09-19 06:59:40.683860 | orchestrator | ok: [testbed-node-4]
2025-09-19 06:59:40.683870 | orchestrator |
2025-09-19 06:59:40.683881 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-19 06:59:40.683892 | orchestrator | Friday 19 September 2025  06:59:40 +0000 (0:00:00.151)       0:00:49.851 ******
2025-09-19 06:59:40.683903 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'vg_name': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683915 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'vg_name': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683925 | orchestrator |
2025-09-19 06:59:40.683936 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-19 06:59:40.683947 | orchestrator | Friday 19 September 2025  06:59:40 +0000 (0:00:00.169)       0:00:50.020 ******
2025-09-19 06:59:40.683957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.683968 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.683979 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:40.683990 | orchestrator |
2025-09-19 06:59:40.684001 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-19 06:59:40.684011 | orchestrator | Friday 19 September 2025  06:59:40 +0000 (0:00:00.157)       0:00:50.178 ******
2025-09-19 06:59:40.684022 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:40.684033 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:40.684051 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:47.344051 | orchestrator |
2025-09-19 06:59:47.344205 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-19 06:59:47.344243 | orchestrator | Friday 19 September 2025  06:59:40 +0000 (0:00:00.155)       0:00:50.333 ******
2025-09-19 06:59:47.344262 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'})
2025-09-19 06:59:47.344283 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'})
2025-09-19 06:59:47.344300 | orchestrator | skipping: [testbed-node-4]
2025-09-19 06:59:47.344319 | orchestrator |
2025-09-19 06:59:47.344337 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 06:59:47.344354 | orchestrator | Friday 19 September 2025  06:59:40 +0000 (0:00:00.162)       0:00:50.495 ******
2025-09-19 06:59:47.344403 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 06:59:47.344444 | orchestrator |     "lvm_report": {
2025-09-19 06:59:47.344466 | orchestrator |         "lv": [
2025-09-19 06:59:47.344484 | orchestrator |             {
2025-09-19 06:59:47.344501 | orchestrator |                 "lv_name": "osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7",
2025-09-19 06:59:47.344520 | orchestrator |                 "vg_name": "ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7"
2025-09-19 06:59:47.344537 | orchestrator |             },
2025-09-19 06:59:47.344554 | orchestrator |             {
2025-09-19 06:59:47.344573 | orchestrator |                 "lv_name": "osd-block-caff573e-485a-5d29-90dc-90eefd21fd68",
2025-09-19 06:59:47.344591 | orchestrator |                 "vg_name": "ceph-caff573e-485a-5d29-90dc-90eefd21fd68"
2025-09-19 06:59:47.344607 | orchestrator |             }
2025-09-19 06:59:47.344623 | orchestrator |         ],
2025-09-19 06:59:47.344642 | orchestrator |         "pv": [
2025-09-19 06:59:47.344658 | orchestrator |             {
2025-09-19 06:59:47.344676 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-19 06:59:47.344693 | orchestrator |                 "vg_name": "ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7"
2025-09-19 06:59:47.344711 | orchestrator |             },
2025-09-19 06:59:47.344729 | orchestrator |             {
2025-09-19 06:59:47.344748 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-19 06:59:47.344765 | orchestrator |                 "vg_name":
"ceph-caff573e-485a-5d29-90dc-90eefd21fd68" 2025-09-19 06:59:47.344783 | orchestrator |  } 2025-09-19 06:59:47.344800 | orchestrator |  ] 2025-09-19 06:59:47.344817 | orchestrator |  } 2025-09-19 06:59:47.344835 | orchestrator | } 2025-09-19 06:59:47.344854 | orchestrator | 2025-09-19 06:59:47.344871 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 06:59:47.344888 | orchestrator | 2025-09-19 06:59:47.344906 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 06:59:47.344924 | orchestrator | Friday 19 September 2025 06:59:41 +0000 (0:00:00.463) 0:00:50.959 ****** 2025-09-19 06:59:47.344941 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 06:59:47.344959 | orchestrator | 2025-09-19 06:59:47.344977 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 06:59:47.344994 | orchestrator | Friday 19 September 2025 06:59:41 +0000 (0:00:00.253) 0:00:51.213 ****** 2025-09-19 06:59:47.345011 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:59:47.345030 | orchestrator | 2025-09-19 06:59:47.345048 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345064 | orchestrator | Friday 19 September 2025 06:59:41 +0000 (0:00:00.252) 0:00:51.465 ****** 2025-09-19 06:59:47.345079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-19 06:59:47.345095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-19 06:59:47.345111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-19 06:59:47.345129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-19 06:59:47.345145 | orchestrator | included: 
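The "Get list of Ceph LVs/PVs with associated VGs" tasks above run `lvs`/`pvs` with JSON report output, and the "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task merges the two reports into the `lvm_report` structure printed by "Print LVM report data". A minimal Python sketch of that combination step — the variable names come from the task title, and the input shape assumes LVM's `--reportformat json` convention (`{"report": [{"lv": [...]}]}`), which is not shown verbatim in the log:

```python
import json

# Shaped like `lvs --reportformat json -o lv_name,vg_name` and
# `pvs --reportformat json -o pv_name,vg_name` output (assumed shape;
# the entry values are taken from the report printed in the log above).
_lvs_cmd_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7",
     "vg_name": "ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7"},
]}]})
_pvs_cmd_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7"},
]}]})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv and pv sections of two LVM JSON reports
    into one dict with the lvm_report layout seen in the log."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

lvm_report = combine_reports(_lvs_cmd_output, _pvs_cmd_output)
print(json.dumps(lvm_report, indent=2))
```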
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-19 06:59:47.345160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-19 06:59:47.345175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-19 06:59:47.345191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-19 06:59:47.345207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-19 06:59:47.345223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-19 06:59:47.345238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-19 06:59:47.345266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-19 06:59:47.345282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-19 06:59:47.345297 | orchestrator | 2025-09-19 06:59:47.345314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345332 | orchestrator | Friday 19 September 2025 06:59:42 +0000 (0:00:00.441) 0:00:51.907 ****** 2025-09-19 06:59:47.345349 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:47.345372 | orchestrator | 2025-09-19 06:59:47.345391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345407 | orchestrator | Friday 19 September 2025 06:59:42 +0000 (0:00:00.201) 0:00:52.109 ****** 2025-09-19 06:59:47.345454 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:47.345472 | orchestrator | 2025-09-19 06:59:47.345489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345532 | orchestrator | 
Friday 19 September 2025 06:59:42 +0000 (0:00:00.226) 0:00:52.336 ****** 2025-09-19 06:59:47.345549 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:47.345565 | orchestrator | 2025-09-19 06:59:47.345581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345596 | orchestrator | Friday 19 September 2025 06:59:42 +0000 (0:00:00.205) 0:00:52.541 ****** 2025-09-19 06:59:47.345612 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:47.345628 | orchestrator | 2025-09-19 06:59:47.345643 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345659 | orchestrator | Friday 19 September 2025 06:59:43 +0000 (0:00:00.205) 0:00:52.746 ****** 2025-09-19 06:59:47.345675 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:47.345693 | orchestrator | 2025-09-19 06:59:47.345709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345724 | orchestrator | Friday 19 September 2025 06:59:43 +0000 (0:00:00.217) 0:00:52.964 ****** 2025-09-19 06:59:47.345741 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:47.345756 | orchestrator | 2025-09-19 06:59:47.345771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345786 | orchestrator | Friday 19 September 2025 06:59:44 +0000 (0:00:00.719) 0:00:53.683 ****** 2025-09-19 06:59:47.345802 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:47.345817 | orchestrator | 2025-09-19 06:59:47.345833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345849 | orchestrator | Friday 19 September 2025 06:59:44 +0000 (0:00:00.274) 0:00:53.957 ****** 2025-09-19 06:59:47.345865 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:47.345880 | orchestrator | 2025-09-19 06:59:47.345900 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.345920 | orchestrator | Friday 19 September 2025 06:59:44 +0000 (0:00:00.232) 0:00:54.190 ****** 2025-09-19 06:59:47.345936 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf) 2025-09-19 06:59:47.346084 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf) 2025-09-19 06:59:47.346108 | orchestrator | 2025-09-19 06:59:47.346125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.346145 | orchestrator | Friday 19 September 2025 06:59:45 +0000 (0:00:00.476) 0:00:54.667 ****** 2025-09-19 06:59:47.346164 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400) 2025-09-19 06:59:47.346182 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400) 2025-09-19 06:59:47.346199 | orchestrator | 2025-09-19 06:59:47.346217 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.346233 | orchestrator | Friday 19 September 2025 06:59:45 +0000 (0:00:00.457) 0:00:55.124 ****** 2025-09-19 06:59:47.346273 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c) 2025-09-19 06:59:47.346292 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c) 2025-09-19 06:59:47.346310 | orchestrator | 2025-09-19 06:59:47.346329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.346347 | orchestrator | Friday 19 September 2025 06:59:45 +0000 (0:00:00.423) 0:00:55.547 ****** 2025-09-19 06:59:47.346364 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3) 2025-09-19 06:59:47.346382 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3) 2025-09-19 06:59:47.346401 | orchestrator | 2025-09-19 06:59:47.346419 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 06:59:47.346552 | orchestrator | Friday 19 September 2025 06:59:46 +0000 (0:00:00.522) 0:00:56.070 ****** 2025-09-19 06:59:47.346570 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 06:59:47.346586 | orchestrator | 2025-09-19 06:59:47.346605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:47.346622 | orchestrator | Friday 19 September 2025 06:59:46 +0000 (0:00:00.396) 0:00:56.466 ****** 2025-09-19 06:59:47.346640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 06:59:47.346658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 06:59:47.346675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 06:59:47.346690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 06:59:47.346706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 06:59:47.346722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 06:59:47.346737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-19 06:59:47.346754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 06:59:47.346773 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 06:59:47.346790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 06:59:47.346806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 06:59:47.346842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 06:59:56.406121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 06:59:56.406225 | orchestrator | 2025-09-19 06:59:56.406242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406254 | orchestrator | Friday 19 September 2025 06:59:47 +0000 (0:00:00.524) 0:00:56.991 ****** 2025-09-19 06:59:56.406265 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406277 | orchestrator | 2025-09-19 06:59:56.406288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406299 | orchestrator | Friday 19 September 2025 06:59:47 +0000 (0:00:00.212) 0:00:57.204 ****** 2025-09-19 06:59:56.406310 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406321 | orchestrator | 2025-09-19 06:59:56.406332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406343 | orchestrator | Friday 19 September 2025 06:59:47 +0000 (0:00:00.200) 0:00:57.405 ****** 2025-09-19 06:59:56.406354 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406365 | orchestrator | 2025-09-19 06:59:56.406376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406410 | orchestrator | Friday 19 September 2025 06:59:48 +0000 (0:00:00.756) 0:00:58.161 ****** 2025-09-19 06:59:56.406479 | orchestrator | 
skipping: [testbed-node-5] 2025-09-19 06:59:56.406491 | orchestrator | 2025-09-19 06:59:56.406502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406512 | orchestrator | Friday 19 September 2025 06:59:48 +0000 (0:00:00.219) 0:00:58.381 ****** 2025-09-19 06:59:56.406523 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406534 | orchestrator | 2025-09-19 06:59:56.406544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406555 | orchestrator | Friday 19 September 2025 06:59:48 +0000 (0:00:00.211) 0:00:58.593 ****** 2025-09-19 06:59:56.406566 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406576 | orchestrator | 2025-09-19 06:59:56.406587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406598 | orchestrator | Friday 19 September 2025 06:59:49 +0000 (0:00:00.212) 0:00:58.805 ****** 2025-09-19 06:59:56.406610 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406622 | orchestrator | 2025-09-19 06:59:56.406635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406647 | orchestrator | Friday 19 September 2025 06:59:49 +0000 (0:00:00.211) 0:00:59.017 ****** 2025-09-19 06:59:56.406659 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406671 | orchestrator | 2025-09-19 06:59:56.406683 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406695 | orchestrator | Friday 19 September 2025 06:59:49 +0000 (0:00:00.218) 0:00:59.235 ****** 2025-09-19 06:59:56.406707 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-19 06:59:56.406720 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-19 06:59:56.406748 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-19 
06:59:56.406760 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-19 06:59:56.406772 | orchestrator | 2025-09-19 06:59:56.406785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406797 | orchestrator | Friday 19 September 2025 06:59:50 +0000 (0:00:00.660) 0:00:59.895 ****** 2025-09-19 06:59:56.406809 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406820 | orchestrator | 2025-09-19 06:59:56.406832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406845 | orchestrator | Friday 19 September 2025 06:59:50 +0000 (0:00:00.214) 0:01:00.110 ****** 2025-09-19 06:59:56.406856 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406869 | orchestrator | 2025-09-19 06:59:56.406881 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406894 | orchestrator | Friday 19 September 2025 06:59:50 +0000 (0:00:00.199) 0:01:00.310 ****** 2025-09-19 06:59:56.406906 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406919 | orchestrator | 2025-09-19 06:59:56.406931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 06:59:56.406943 | orchestrator | Friday 19 September 2025 06:59:50 +0000 (0:00:00.199) 0:01:00.509 ****** 2025-09-19 06:59:56.406955 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.406967 | orchestrator | 2025-09-19 06:59:56.406979 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 06:59:56.406991 | orchestrator | Friday 19 September 2025 06:59:51 +0000 (0:00:00.198) 0:01:00.707 ****** 2025-09-19 06:59:56.407003 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.407014 | orchestrator | 2025-09-19 06:59:56.407025 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-19 06:59:56.407036 | orchestrator | Friday 19 September 2025 06:59:51 +0000 (0:00:00.345) 0:01:01.052 ****** 2025-09-19 06:59:56.407046 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd4db71fd-07e0-550b-b185-dcfd36a5307b'}}) 2025-09-19 06:59:56.407058 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0c5dfb3-0a46-5f65-b869-b08108365918'}}) 2025-09-19 06:59:56.407079 | orchestrator | 2025-09-19 06:59:56.407090 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 06:59:56.407100 | orchestrator | Friday 19 September 2025 06:59:51 +0000 (0:00:00.191) 0:01:01.244 ****** 2025-09-19 06:59:56.407113 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'}) 2025-09-19 06:59:56.407125 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'}) 2025-09-19 06:59:56.407136 | orchestrator | 2025-09-19 06:59:56.407146 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 06:59:56.407174 | orchestrator | Friday 19 September 2025 06:59:53 +0000 (0:00:01.815) 0:01:03.059 ****** 2025-09-19 06:59:56.407186 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})  2025-09-19 06:59:56.407198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})  2025-09-19 06:59:56.407209 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.407219 | orchestrator | 2025-09-19 06:59:56.407230 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-19 06:59:56.407241 | orchestrator | Friday 19 September 2025 06:59:53 +0000 (0:00:00.150) 0:01:03.210 ****** 2025-09-19 06:59:56.407251 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'}) 2025-09-19 06:59:56.407262 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'}) 2025-09-19 06:59:56.407274 | orchestrator | 2025-09-19 06:59:56.407285 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 06:59:56.407295 | orchestrator | Friday 19 September 2025 06:59:54 +0000 (0:00:01.289) 0:01:04.500 ****** 2025-09-19 06:59:56.407306 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})  2025-09-19 06:59:56.407317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})  2025-09-19 06:59:56.407327 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.407338 | orchestrator | 2025-09-19 06:59:56.407349 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 06:59:56.407359 | orchestrator | Friday 19 September 2025 06:59:55 +0000 (0:00:00.156) 0:01:04.657 ****** 2025-09-19 06:59:56.407370 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.407380 | orchestrator | 2025-09-19 06:59:56.407391 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 06:59:56.407401 | orchestrator | Friday 19 September 2025 06:59:55 +0000 (0:00:00.145) 0:01:04.802 ****** 2025-09-19 06:59:56.407412 | 
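The "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs/LVs" tasks above derive one VG/LV pair per device from its `osd_lvm_uuid`: a VG named `ceph-<uuid>` on the device's PV and an LV named `osd-block-<uuid>` inside it. A sketch of that name derivation — a guess at the templating, based only on the item values visible in the log:

```python
# ceph_osd_devices items as printed in the log for testbed-node-5
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "d4db71fd-07e0-550b-b185-dcfd36a5307b"},
    "sdc": {"osd_lvm_uuid": "a0c5dfb3-0a46-5f65-b869-b08108365918"},
}

def lvm_volumes(devices: dict) -> list:
    """Build the lvm_volumes-style items the 'Create block VGs' and
    'Create block LVs' tasks loop over: one osd-block LV inside a
    ceph-<uuid> VG per OSD device."""
    return [
        {"data": "osd-block-" + v["osd_lvm_uuid"],
         "data_vg": "ceph-" + v["osd_lvm_uuid"]}
        for v in devices.values()
    ]

for item in lvm_volumes(ceph_osd_devices):
    # the real tasks feed these names to vgcreate/lvcreate
    print(item["data_vg"], "->", item["data"])
```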
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})  2025-09-19 06:59:56.407446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})  2025-09-19 06:59:56.407458 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.407469 | orchestrator | 2025-09-19 06:59:56.407480 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-19 06:59:56.407491 | orchestrator | Friday 19 September 2025 06:59:55 +0000 (0:00:00.153) 0:01:04.956 ****** 2025-09-19 06:59:56.407501 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.407519 | orchestrator | 2025-09-19 06:59:56.407529 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-19 06:59:56.407540 | orchestrator | Friday 19 September 2025 06:59:55 +0000 (0:00:00.142) 0:01:05.098 ****** 2025-09-19 06:59:56.407551 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})  2025-09-19 06:59:56.407561 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})  2025-09-19 06:59:56.407572 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.407583 | orchestrator | 2025-09-19 06:59:56.407593 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-19 06:59:56.407604 | orchestrator | Friday 19 September 2025 06:59:55 +0000 (0:00:00.147) 0:01:05.246 ****** 2025-09-19 06:59:56.407614 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.407625 | orchestrator | 2025-09-19 06:59:56.407636 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-19 06:59:56.407646 | orchestrator | Friday 19 September 2025 06:59:55 +0000 (0:00:00.163) 0:01:05.410 ****** 2025-09-19 06:59:56.407657 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})  2025-09-19 06:59:56.407667 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})  2025-09-19 06:59:56.407678 | orchestrator | skipping: [testbed-node-5] 2025-09-19 06:59:56.407688 | orchestrator | 2025-09-19 06:59:56.407699 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-19 06:59:56.407710 | orchestrator | Friday 19 September 2025 06:59:55 +0000 (0:00:00.156) 0:01:05.566 ****** 2025-09-19 06:59:56.407720 | orchestrator | ok: [testbed-node-5] 2025-09-19 06:59:56.407731 | orchestrator | 2025-09-19 06:59:56.407741 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-19 06:59:56.407752 | orchestrator | Friday 19 September 2025 06:59:56 +0000 (0:00:00.319) 0:01:05.886 ****** 2025-09-19 06:59:56.407769 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})  2025-09-19 07:00:02.462009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})  2025-09-19 07:00:02.462144 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:00:02.462154 | orchestrator | 2025-09-19 07:00:02.462163 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-19 07:00:02.462171 | orchestrator | Friday 19 September 2025 
06:59:56 +0000 (0:00:00.171) 0:01:06.057 ****** 2025-09-19 07:00:02.462178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})  2025-09-19 07:00:02.462185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})  2025-09-19 07:00:02.462192 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:00:02.462199 | orchestrator | 2025-09-19 07:00:02.462206 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 07:00:02.462212 | orchestrator | Friday 19 September 2025 06:59:56 +0000 (0:00:00.153) 0:01:06.211 ****** 2025-09-19 07:00:02.462219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})  2025-09-19 07:00:02.462225 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})  2025-09-19 07:00:02.462232 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:00:02.462257 | orchestrator | 2025-09-19 07:00:02.462264 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 07:00:02.462270 | orchestrator | Friday 19 September 2025 06:59:56 +0000 (0:00:00.155) 0:01:06.366 ****** 2025-09-19 07:00:02.462276 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:00:02.462282 | orchestrator | 2025-09-19 07:00:02.462289 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 07:00:02.462295 | orchestrator | Friday 19 September 2025 06:59:56 +0000 (0:00:00.138) 0:01:06.505 ****** 2025-09-19 07:00:02.462301 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
07:00:02.462307 | orchestrator |
2025-09-19 07:00:02.462314 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-19 07:00:02.462321 | orchestrator | Friday 19 September 2025  06:59:57 +0000 (0:00:00.155)       0:01:06.661 ******
2025-09-19 07:00:02.462327 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462333 | orchestrator |
2025-09-19 07:00:02.462339 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-19 07:00:02.462345 | orchestrator | Friday 19 September 2025  06:59:57 +0000 (0:00:00.145)       0:01:06.806 ******
2025-09-19 07:00:02.462352 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 07:00:02.462359 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-19 07:00:02.462365 | orchestrator | }
2025-09-19 07:00:02.462371 | orchestrator |
2025-09-19 07:00:02.462378 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-19 07:00:02.462385 | orchestrator | Friday 19 September 2025  06:59:57 +0000 (0:00:00.150)       0:01:06.956 ******
2025-09-19 07:00:02.462392 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 07:00:02.462398 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-19 07:00:02.462404 | orchestrator | }
2025-09-19 07:00:02.462410 | orchestrator |
2025-09-19 07:00:02.462462 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-19 07:00:02.462470 | orchestrator | Friday 19 September 2025  06:59:57 +0000 (0:00:00.141)       0:01:07.098 ******
2025-09-19 07:00:02.462477 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 07:00:02.462483 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-19 07:00:02.462489 | orchestrator | }
2025-09-19 07:00:02.462494 | orchestrator |
2025-09-19 07:00:02.462500 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-19 07:00:02.462506 | orchestrator | Friday 19 September 2025  06:59:57 +0000 (0:00:00.147)       0:01:07.245 ******
2025-09-19 07:00:02.462511 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:00:02.462517 | orchestrator |
2025-09-19 07:00:02.462522 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-19 07:00:02.462528 | orchestrator | Friday 19 September 2025  06:59:58 +0000 (0:00:00.516)       0:01:07.761 ******
2025-09-19 07:00:02.462535 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:00:02.462541 | orchestrator |
2025-09-19 07:00:02.462547 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-19 07:00:02.462552 | orchestrator | Friday 19 September 2025  06:59:58 +0000 (0:00:00.509)       0:01:08.271 ******
2025-09-19 07:00:02.462558 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:00:02.462564 | orchestrator |
2025-09-19 07:00:02.462569 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-19 07:00:02.462574 | orchestrator | Friday 19 September 2025  06:59:59 +0000 (0:00:00.701)       0:01:08.972 ******
2025-09-19 07:00:02.462580 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:00:02.462585 | orchestrator |
2025-09-19 07:00:02.462591 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-19 07:00:02.462597 | orchestrator | Friday 19 September 2025  06:59:59 +0000 (0:00:00.132)       0:01:09.104 ******
2025-09-19 07:00:02.462602 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462608 | orchestrator |
2025-09-19 07:00:02.462614 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-19 07:00:02.462620 | orchestrator | Friday 19 September 2025  06:59:59 +0000 (0:00:00.126)       0:01:09.231 ******
2025-09-19 07:00:02.462632 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462637 | orchestrator |
2025-09-19 07:00:02.462643 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-19 07:00:02.462649 | orchestrator | Friday 19 September 2025  06:59:59 +0000 (0:00:00.121)       0:01:09.352 ******
2025-09-19 07:00:02.462654 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 07:00:02.462660 | orchestrator |     "vgs_report": {
2025-09-19 07:00:02.462666 | orchestrator |         "vg": []
2025-09-19 07:00:02.462688 | orchestrator |     }
2025-09-19 07:00:02.462694 | orchestrator | }
2025-09-19 07:00:02.462699 | orchestrator |
2025-09-19 07:00:02.462705 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-19 07:00:02.462711 | orchestrator | Friday 19 September 2025  06:59:59 +0000 (0:00:00.147)       0:01:09.499 ******
2025-09-19 07:00:02.462716 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462721 | orchestrator |
2025-09-19 07:00:02.462727 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-19 07:00:02.462732 | orchestrator | Friday 19 September 2025  06:59:59 +0000 (0:00:00.124)       0:01:09.624 ******
2025-09-19 07:00:02.462738 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462744 | orchestrator |
2025-09-19 07:00:02.462750 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-19 07:00:02.462756 | orchestrator | Friday 19 September 2025  07:00:00 +0000 (0:00:00.156)       0:01:09.781 ******
2025-09-19 07:00:02.462761 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462766 | orchestrator |
2025-09-19 07:00:02.462772 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-19 07:00:02.462777 | orchestrator | Friday 19 September 2025  07:00:00 +0000 (0:00:00.132)       0:01:09.913 ******
2025-09-19 07:00:02.462783 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462789 | orchestrator |
2025-09-19 07:00:02.462795 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-19 07:00:02.462817 | orchestrator | Friday 19 September 2025  07:00:00 +0000 (0:00:00.117)       0:01:10.030 ******
2025-09-19 07:00:02.462823 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462830 | orchestrator |
2025-09-19 07:00:02.462836 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-19 07:00:02.462842 | orchestrator | Friday 19 September 2025  07:00:00 +0000 (0:00:00.122)       0:01:10.152 ******
2025-09-19 07:00:02.462848 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462854 | orchestrator |
2025-09-19 07:00:02.462860 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-19 07:00:02.462866 | orchestrator | Friday 19 September 2025  07:00:00 +0000 (0:00:00.126)       0:01:10.279 ******
2025-09-19 07:00:02.462872 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462877 | orchestrator |
2025-09-19 07:00:02.462883 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-19 07:00:02.462889 | orchestrator | Friday 19 September 2025  07:00:00 +0000 (0:00:00.136)       0:01:10.415 ******
2025-09-19 07:00:02.462894 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462900 | orchestrator |
2025-09-19 07:00:02.462907 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-19 07:00:02.462913 | orchestrator | Friday 19 September 2025  07:00:00 +0000 (0:00:00.127)       0:01:10.543 ******
2025-09-19 07:00:02.462919 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462924 | orchestrator |
2025-09-19 07:00:02.462930 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-19 07:00:02.462940 | orchestrator | Friday 19 September 2025  07:00:01 +0000 (0:00:00.360)       0:01:10.903 ******
2025-09-19 07:00:02.462946 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462952 | orchestrator |
2025-09-19 07:00:02.462958 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-19 07:00:02.462964 | orchestrator | Friday 19 September 2025  07:00:01 +0000 (0:00:00.152)       0:01:11.055 ******
2025-09-19 07:00:02.462970 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.462983 | orchestrator |
2025-09-19 07:00:02.462989 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-19 07:00:02.462994 | orchestrator | Friday 19 September 2025  07:00:01 +0000 (0:00:00.144)       0:01:11.200 ******
2025-09-19 07:00:02.463000 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.463006 | orchestrator |
2025-09-19 07:00:02.463012 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-19 07:00:02.463018 | orchestrator | Friday 19 September 2025  07:00:01 +0000 (0:00:00.130)       0:01:11.330 ******
2025-09-19 07:00:02.463024 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.463030 | orchestrator |
2025-09-19 07:00:02.463036 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 07:00:02.463042 | orchestrator | Friday 19 September 2025  07:00:01 +0000 (0:00:00.139)       0:01:11.470 ******
2025-09-19 07:00:02.463048 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.463054 | orchestrator |
2025-09-19 07:00:02.463060 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 07:00:02.463065 | orchestrator | Friday 19 September 2025  07:00:01 +0000 (0:00:00.146)       0:01:11.616 ******
2025-09-19 07:00:02.463072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:02.463079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:02.463085 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.463091 | orchestrator |
2025-09-19 07:00:02.463097 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 07:00:02.463103 | orchestrator | Friday 19 September 2025  07:00:02 +0000 (0:00:00.170)       0:01:11.787 ******
2025-09-19 07:00:02.463109 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:02.463115 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:02.463121 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:02.463127 | orchestrator |
2025-09-19 07:00:02.463133 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 07:00:02.463138 | orchestrator | Friday 19 September 2025  07:00:02 +0000 (0:00:00.163)       0:01:11.950 ******
2025-09-19 07:00:02.463152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.584679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.584772 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:05.584784 | orchestrator |
2025-09-19 07:00:05.584792 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 07:00:05.584800 | orchestrator | Friday 19 September 2025  07:00:02 +0000 (0:00:00.163)       0:01:12.114 ******
2025-09-19 07:00:05.584807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.584813 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.584820 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:05.584826 | orchestrator |
2025-09-19 07:00:05.584832 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 07:00:05.584839 | orchestrator | Friday 19 September 2025  07:00:02 +0000 (0:00:00.168)       0:01:12.282 ******
2025-09-19 07:00:05.584845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.584873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.584879 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:05.584885 | orchestrator |
2025-09-19 07:00:05.584890 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 07:00:05.584896 | orchestrator | Friday 19 September 2025  07:00:02 +0000 (0:00:00.165)       0:01:12.448 ******
2025-09-19 07:00:05.584902 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.584908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.584914 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:05.584920 | orchestrator |
2025-09-19 07:00:05.584939 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 07:00:05.584945 | orchestrator | Friday 19 September 2025  07:00:02 +0000 (0:00:00.182)       0:01:12.631 ******
2025-09-19 07:00:05.584951 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.584956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.584963 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:05.584968 | orchestrator |
2025-09-19 07:00:05.584974 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 07:00:05.584980 | orchestrator | Friday 19 September 2025  07:00:03 +0000 (0:00:00.357)       0:01:12.989 ******
2025-09-19 07:00:05.584986 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.584991 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.584996 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:05.585002 | orchestrator |
2025-09-19 07:00:05.585008 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-19 07:00:05.585013 | orchestrator | Friday 19 September 2025  07:00:03 +0000 (0:00:00.165)       0:01:13.154 ******
2025-09-19 07:00:05.585019 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:00:05.585025 | orchestrator |
2025-09-19 07:00:05.585031 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-19 07:00:05.585037 | orchestrator | Friday 19 September 2025  07:00:03 +0000 (0:00:00.498)       0:01:13.653 ******
2025-09-19 07:00:05.585043 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:00:05.585049 | orchestrator |
2025-09-19 07:00:05.585054 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-19 07:00:05.585060 | orchestrator | Friday 19 September 2025  07:00:04 +0000 (0:00:00.560)       0:01:14.213 ******
2025-09-19 07:00:05.585066 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:00:05.585071 | orchestrator |
2025-09-19 07:00:05.585076 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-19 07:00:05.585083 | orchestrator | Friday 19 September 2025  07:00:04 +0000 (0:00:00.151)       0:01:14.365 ******
2025-09-19 07:00:05.585088 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'vg_name': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.585095 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'vg_name': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.585101 | orchestrator |
2025-09-19 07:00:05.585107 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-19 07:00:05.585120 | orchestrator | Friday 19 September 2025  07:00:04 +0000 (0:00:00.188)       0:01:14.553 ******
2025-09-19 07:00:05.585141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.585147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.585153 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:05.585159 | orchestrator |
2025-09-19 07:00:05.585165 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-19 07:00:05.585171 | orchestrator | Friday 19 September 2025  07:00:05 +0000 (0:00:00.160)       0:01:14.714 ******
2025-09-19 07:00:05.585176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.585182 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.585188 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:05.585194 | orchestrator |
2025-09-19 07:00:05.585199 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-19 07:00:05.585205 | orchestrator | Friday 19 September 2025  07:00:05 +0000 (0:00:00.173)       0:01:14.888 ******
2025-09-19 07:00:05.585210 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'})
2025-09-19 07:00:05.585216 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'})
2025-09-19 07:00:05.585222 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:05.585227 | orchestrator |
2025-09-19 07:00:05.585232 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 07:00:05.585239 | orchestrator | Friday 19 September 2025  07:00:05 +0000 (0:00:00.170)       0:01:15.058 ******
2025-09-19 07:00:05.585245 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 07:00:05.585251 | orchestrator |     "lvm_report": {
2025-09-19 07:00:05.585258 | orchestrator |         "lv": [
2025-09-19 07:00:05.585264 | orchestrator |             {
2025-09-19 07:00:05.585270 | orchestrator |                 "lv_name": "osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918",
2025-09-19 07:00:05.585281 | orchestrator |                 "vg_name": "ceph-a0c5dfb3-0a46-5f65-b869-b08108365918"
2025-09-19 07:00:05.585287 | orchestrator |             },
2025-09-19 07:00:05.585293 | orchestrator |             {
2025-09-19 07:00:05.585299 | orchestrator |                 "lv_name": "osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b",
2025-09-19 07:00:05.585305 | orchestrator |                 "vg_name": "ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b"
2025-09-19 07:00:05.585311 | orchestrator |             }
2025-09-19 07:00:05.585316 | orchestrator |         ],
2025-09-19 07:00:05.585322 | orchestrator |         "pv": [
2025-09-19 07:00:05.585328 | orchestrator |             {
2025-09-19 07:00:05.585334 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-19 07:00:05.585340 | orchestrator |                 "vg_name": "ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b"
2025-09-19 07:00:05.585346 | orchestrator |             },
2025-09-19 07:00:05.585352 | orchestrator |             {
2025-09-19 07:00:05.585357 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-19 07:00:05.585363 | orchestrator |                 "vg_name": "ceph-a0c5dfb3-0a46-5f65-b869-b08108365918"
2025-09-19 07:00:05.585369 | orchestrator |             }
2025-09-19 07:00:05.585376 | orchestrator |         ]
2025-09-19 07:00:05.585381 | orchestrator |     }
2025-09-19 07:00:05.585388 | orchestrator | }
2025-09-19 07:00:05.585393 | orchestrator |
2025-09-19 07:00:05.585399 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:00:05.585409 | orchestrator | testbed-node-3 : ok=51 changed=2 unreachable=0 failed=0 skipped=62 rescued=0 ignored=0
2025-09-19 07:00:05.585414 | orchestrator | testbed-node-4 : ok=51 changed=2 unreachable=0 failed=0 skipped=62 rescued=0 ignored=0
2025-09-19 07:00:05.585457 | orchestrator | testbed-node-5 : ok=51 changed=2 unreachable=0 failed=0 skipped=62 rescued=0 ignored=0
2025-09-19 07:00:05.585465 | orchestrator |
2025-09-19 07:00:05.585470 | orchestrator |
2025-09-19 07:00:05.585476 | orchestrator |
2025-09-19 07:00:05.585481 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:00:05.585487 | orchestrator | Friday 19 September 2025  07:00:05 +0000 (0:00:00.156)       0:01:15.215 ******
2025-09-19 07:00:05.585493 | orchestrator | ===============================================================================
2025-09-19 07:00:05.585498 | orchestrator | Create block VGs -------------------------------------------------------- 5.81s
2025-09-19 07:00:05.585504 | orchestrator | Create block LVs -------------------------------------------------------- 4.10s
2025-09-19 07:00:05.585509 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.00s
2025-09-19 07:00:05.585515 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.74s
2025-09-19 07:00:05.585520 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.66s
2025-09-19 07:00:05.585526 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s
2025-09-19 07:00:05.585532 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s
2025-09-19 07:00:05.585537 | orchestrator | Add known partitions to the list of available block devices ------------- 1.57s
2025-09-19 07:00:05.585549 | orchestrator | Add known links to the list of available block devices ------------------ 1.27s
2025-09-19 07:00:05.974692 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s
2025-09-19 07:00:05.974777 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2025-09-19 07:00:05.974786 | orchestrator | Print LVM report data --------------------------------------------------- 0.92s
2025-09-19 07:00:05.974793 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s
2025-09-19 07:00:05.974802 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.79s
2025-09-19 07:00:05.974808 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2025-09-19 07:00:05.974814 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2025-09-19 07:00:05.974820 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.75s
2025-09-19 07:00:05.974826 | orchestrator | Get initial list of available block devices ----------------------------- 0.73s
2025-09-19 07:00:05.974832 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-09-19 07:00:05.974839 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-09-19 07:00:18.084617 | orchestrator | 2025-09-19 07:00:18 | INFO  | Task e54d4381-bdf4-4939-8523-e9597798028c (facts) was prepared for execution.
2025-09-19 07:00:18.084755 | orchestrator | 2025-09-19 07:00:18 | INFO  | It takes a moment until task e54d4381-bdf4-4939-8523-e9597798028c (facts) has been started and output is visible here.
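The capacity checks above ("Fail if size of DB LVs ... > available") compare the LV sizes requested per VG against the free bytes reported by LVM. A minimal sketch of that comparison, assuming input in the shape of `vgs --units b --nosuffix --reportformat json` output (the sample VG name and sizes below are hypothetical; in the run above the report was empty, `"vg": []`, so every check was skipped):

```python
import json

# Hypothetical sample of `vgs --reportformat json` output with byte units.
VGS_JSON = """
{
  "report": [
    {
      "vg": [
        {"vg_name": "ceph-db-0", "vg_size": "107374182400", "vg_free": "107374182400"}
      ]
    }
  ]
}
"""

def check_vg_capacity(vgs_json: str, wanted_bytes_per_vg: dict) -> list:
    """Return one error string per VG whose free space is below what is wanted."""
    vgs = json.loads(vgs_json)["report"][0]["vg"]
    free = {vg["vg_name"]: int(vg["vg_free"]) for vg in vgs}
    errors = []
    for vg_name, wanted in wanted_bytes_per_vg.items():
        available = free.get(vg_name, 0)
        if wanted > available:
            errors.append(f"{vg_name}: wanted {wanted} B > available {available} B")
    return errors

# 40 GiB of DB LVs fits into the 100 GiB VG, so no errors are reported.
print(check_vg_capacity(VGS_JSON, {"ceph-db-0": 40 * 1024**3}))  # -> []
```

The same pattern, with a percentage buffer subtracted from `vg_free`, covers the "with buffer" variant of the calculation.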
2025-09-19 07:00:29.865038 | orchestrator |
2025-09-19 07:00:29.865119 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-19 07:00:29.865128 | orchestrator |
2025-09-19 07:00:29.865134 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 07:00:29.865140 | orchestrator | Friday 19 September 2025  07:00:21 +0000 (0:00:00.223)       0:00:00.223 ******
2025-09-19 07:00:29.865145 | orchestrator | ok: [testbed-manager]
2025-09-19 07:00:29.865151 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:00:29.865173 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:00:29.865178 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:00:29.865183 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:00:29.865188 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:00:29.865193 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:00:29.865198 | orchestrator |
2025-09-19 07:00:29.865203 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 07:00:29.865208 | orchestrator | Friday 19 September 2025  07:00:22 +0000 (0:00:00.976)       0:00:01.199 ******
2025-09-19 07:00:29.865224 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:00:29.865230 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:00:29.865236 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:00:29.865241 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:00:29.865246 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:00:29.865251 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:00:29.865256 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:29.865261 | orchestrator |
2025-09-19 07:00:29.865266 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 07:00:29.865271 | orchestrator |
2025-09-19 07:00:29.865276 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 07:00:29.865282 | orchestrator | Friday 19 September 2025  07:00:24 +0000 (0:00:01.120)       0:00:02.319 ******
2025-09-19 07:00:29.865287 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:00:29.865292 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:00:29.865297 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:00:29.865302 | orchestrator | ok: [testbed-manager]
2025-09-19 07:00:29.865307 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:00:29.865311 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:00:29.865316 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:00:29.865321 | orchestrator |
2025-09-19 07:00:29.865327 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 07:00:29.865331 | orchestrator |
2025-09-19 07:00:29.865337 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 07:00:29.865342 | orchestrator | Friday 19 September 2025  07:00:28 +0000 (0:00:04.898)       0:00:07.217 ******
2025-09-19 07:00:29.865347 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:00:29.865352 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:00:29.865357 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:00:29.865362 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:00:29.865387 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:00:29.865393 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:00:29.865398 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:00:29.865403 | orchestrator |
2025-09-19 07:00:29.865428 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:00:29.865435 | orchestrator | testbed-manager : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-09-19 07:00:29.865441 | orchestrator | testbed-node-0 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-09-19 07:00:29.865446 | orchestrator | testbed-node-1 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-09-19 07:00:29.865451 | orchestrator | testbed-node-2 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-09-19 07:00:29.865456 | orchestrator | testbed-node-3 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-09-19 07:00:29.865461 | orchestrator | testbed-node-4 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-09-19 07:00:29.865466 | orchestrator | testbed-node-5 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-09-19 07:00:29.865479 | orchestrator |
2025-09-19 07:00:29.865484 | orchestrator |
2025-09-19 07:00:29.865489 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:00:29.865494 | orchestrator | Friday 19 September 2025  07:00:29 +0000 (0:00:00.502)       0:00:07.720 ******
2025-09-19 07:00:29.865499 | orchestrator | ===============================================================================
2025-09-19 07:00:29.865505 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.90s
2025-09-19 07:00:29.865510 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s
2025-09-19 07:00:29.865515 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s
2025-09-19 07:00:29.865520 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-09-19 07:00:42.124385 | orchestrator | 2025-09-19 07:00:42 | INFO  | Task f07af2c0-1692-4a3d-95fd-8b61079070d4 (frr) was prepared for execution.
2025-09-19 07:00:42.124531 | orchestrator | 2025-09-19 07:00:42 | INFO  | It takes a moment until task f07af2c0-1692-4a3d-95fd-8b61079070d4 (frr) has been started and output is visible here.
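The facts play above relies on Ansible's local-facts convention: JSON files named `*.fact` dropped into a custom facts directory are picked up during fact gathering and exposed under `ansible_local.<name>`. A minimal sketch of that mechanism, using a temporary stand-in directory and a hypothetical `testbed.fact` file rather than the role's real facts path:

```python
import json
import tempfile
from pathlib import Path

# Stand-in for the custom facts directory the role creates
# (conventionally /etc/ansible/facts.d on the managed host).
facts_d = Path(tempfile.mkdtemp()) / "facts.d"
facts_d.mkdir()

# A hypothetical fact file; any JSON object works.
(facts_d / "testbed.fact").write_text(json.dumps({"role": "manager"}))

# Roughly what fact gathering exposes as ansible_local.<file stem>.
ansible_local = {p.stem: json.loads(p.read_text()) for p in facts_d.glob("*.fact")}
print(ansible_local["testbed"]["role"])  # -> manager
```

This is why the "Copy fact files" task is skipped when no custom fact files are configured: the directory exists, but there is nothing to place in it.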
2025-09-19 07:01:09.614829 | orchestrator |
2025-09-19 07:01:09.614915 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-19 07:01:09.614926 | orchestrator |
2025-09-19 07:01:09.614933 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-19 07:01:09.614940 | orchestrator | Friday 19 September 2025  07:00:46 +0000 (0:00:00.240)       0:00:00.240 ******
2025-09-19 07:01:09.614947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 07:01:09.614955 | orchestrator |
2025-09-19 07:01:09.614961 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-19 07:01:09.614971 | orchestrator | Friday 19 September 2025  07:00:46 +0000 (0:00:00.239)       0:00:00.480 ******
2025-09-19 07:01:09.614982 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:09.614994 | orchestrator |
2025-09-19 07:01:09.615005 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-19 07:01:09.615016 | orchestrator | Friday 19 September 2025  07:00:47 +0000 (0:00:01.144)       0:00:01.625 ******
2025-09-19 07:01:09.615027 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:09.615038 | orchestrator |
2025-09-19 07:01:09.615049 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-19 07:01:09.615060 | orchestrator | Friday 19 September 2025  07:00:57 +0000 (0:00:09.762)       0:00:11.387 ******
2025-09-19 07:01:09.615071 | orchestrator | ok: [testbed-manager]
2025-09-19 07:01:09.615083 | orchestrator |
2025-09-19 07:01:09.615094 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-19 07:01:09.615105 | orchestrator | Friday 19 September 2025  07:00:58 +0000 (0:00:01.503)       0:00:12.891 ******
2025-09-19 07:01:09.615115 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:09.615126 | orchestrator |
2025-09-19 07:01:09.615137 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-19 07:01:09.615148 | orchestrator | Friday 19 September 2025  07:00:59 +0000 (0:00:01.089)       0:00:13.980 ******
2025-09-19 07:01:09.615158 | orchestrator | ok: [testbed-manager]
2025-09-19 07:01:09.615169 | orchestrator |
2025-09-19 07:01:09.615201 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-19 07:01:09.615213 | orchestrator | Friday 19 September 2025  07:01:01 +0000 (0:00:01.391)       0:00:15.371 ******
2025-09-19 07:01:09.615224 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 07:01:09.615235 | orchestrator |
2025-09-19 07:01:09.615246 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-19 07:01:09.615257 | orchestrator | Friday 19 September 2025  07:01:02 +0000 (0:00:00.830)       0:00:16.202 ******
2025-09-19 07:01:09.615267 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:01:09.615278 | orchestrator |
2025-09-19 07:01:09.615289 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-19 07:01:09.615324 | orchestrator | Friday 19 September 2025  07:01:02 +0000 (0:00:00.153)       0:00:16.355 ******
2025-09-19 07:01:09.615336 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:09.615347 | orchestrator |
2025-09-19 07:01:09.615358 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-19 07:01:09.615369 | orchestrator | Friday 19 September 2025  07:01:03 +0000 (0:00:01.031)       0:00:17.387 ******
2025-09-19 07:01:09.615380 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-19 07:01:09.615391 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-19 07:01:09.615452 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-19 07:01:09.615465 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-19 07:01:09.615479 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-19 07:01:09.615492 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-19 07:01:09.615504 | orchestrator |
2025-09-19 07:01:09.615518 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-19 07:01:09.615531 | orchestrator | Friday 19 September 2025  07:01:06 +0000 (0:00:03.223)       0:00:20.611 ******
2025-09-19 07:01:09.615544 | orchestrator | ok: [testbed-manager]
2025-09-19 07:01:09.615556 | orchestrator |
2025-09-19 07:01:09.615569 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-19 07:01:09.615583 | orchestrator | Friday 19 September 2025  07:01:07 +0000 (0:00:01.422)       0:00:22.034 ******
2025-09-19 07:01:09.615595 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:09.615608 | orchestrator |
2025-09-19 07:01:09.615621 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:01:09.615634 | orchestrator | testbed-manager : ok=11 changed=6 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2025-09-19 07:01:09.615647 | orchestrator |
2025-09-19 07:01:09.615659 | orchestrator |
2025-09-19 07:01:09.615672 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:01:09.615685 | orchestrator | Friday 19 September 2025  07:01:09 +0000 (0:00:01.408)       0:00:23.442 ******
2025-09-19 07:01:09.615698 | orchestrator | ===============================================================================
2025-09-19 07:01:09.615710 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.76s
2025-09-19 07:01:09.615723 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.22s
2025-09-19 07:01:09.615736 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.50s
2025-09-19 07:01:09.615749 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.42s
2025-09-19 07:01:09.615777 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.41s
2025-09-19 07:01:09.615789 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.39s
2025-09-19 07:01:09.615800 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.14s
2025-09-19 07:01:09.615811 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.09s
2025-09-19 07:01:09.615821 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.03s
2025-09-19 07:01:09.615832 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.83s
2025-09-19 07:01:09.615843 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s
2025-09-19 07:01:09.615854 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.15s
2025-09-19 07:01:09.907872 | orchestrator |
2025-09-19 07:01:09.911726 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Sep 19 07:01:09 UTC 2025
2025-09-19 07:01:09.911837 | orchestrator |
2025-09-19 07:01:11.764885 | orchestrator | 2025-09-19 07:01:11 | INFO  | Collection nutshell is prepared for execution
2025-09-19 07:01:11.764986 | orchestrator | 2025-09-19 07:01:11 | INFO  | D [0] - dotfiles
2025-09-19 07:01:21.877179 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [0] - homer
2025-09-19 07:01:21.877283 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [0] - netdata
2025-09-19 07:01:21.877503 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [0] - openstackclient
2025-09-19 07:01:21.877837 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [0] - phpmyadmin
2025-09-19 07:01:21.878600 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [0] - common
2025-09-19 07:01:21.882770 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [1] -- loadbalancer
2025-09-19 07:01:21.883181 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [2] --- opensearch
2025-09-19 07:01:21.883958 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [2] --- mariadb-ng
2025-09-19 07:01:21.884221 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [3] ---- horizon
2025-09-19 07:01:21.884597 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [3] ---- keystone
2025-09-19 07:01:21.885126 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [4] ----- neutron
2025-09-19 07:01:21.885522 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [5] ------ wait-for-nova
2025-09-19 07:01:21.885951 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [5] ------ octavia
2025-09-19 07:01:21.887376 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [4] ----- barbican
2025-09-19 07:01:21.887660 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [4] ----- designate
2025-09-19 07:01:21.887907 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [4] ----- ironic
2025-09-19 07:01:21.888325 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [4] ----- placement
2025-09-19 07:01:21.888526 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [4] ----- magnum
2025-09-19 07:01:21.889342 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [1] -- openvswitch
2025-09-19 07:01:21.889764 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [2] --- ovn
2025-09-19 07:01:21.890519 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [1] --
memcached 2025-09-19 07:01:21.890547 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [1] -- redis 2025-09-19 07:01:21.890787 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [1] -- rabbitmq-ng 2025-09-19 07:01:21.891278 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [0] - kubernetes 2025-09-19 07:01:21.893568 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [1] -- kubeconfig 2025-09-19 07:01:21.894466 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [1] -- copy-kubeconfig 2025-09-19 07:01:21.894490 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [0] - ceph 2025-09-19 07:01:21.896751 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [1] -- ceph-pools 2025-09-19 07:01:21.897755 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [2] --- copy-ceph-keys 2025-09-19 07:01:21.897780 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [3] ---- cephclient 2025-09-19 07:01:21.897791 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-19 07:01:21.897802 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [4] ----- wait-for-keystone 2025-09-19 07:01:21.897813 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-19 07:01:21.897824 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [5] ------ glance 2025-09-19 07:01:21.897835 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [5] ------ cinder 2025-09-19 07:01:21.897851 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [5] ------ nova 2025-09-19 07:01:21.898293 | orchestrator | 2025-09-19 07:01:21 | INFO  | A [4] ----- prometheus 2025-09-19 07:01:21.898316 | orchestrator | 2025-09-19 07:01:21 | INFO  | D [5] ------ grafana 2025-09-19 07:01:22.080345 | orchestrator | 2025-09-19 07:01:22 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-19 07:01:22.080502 | orchestrator | 2025-09-19 07:01:22 | INFO  | Tasks are running in the background 2025-09-19 07:01:25.161313 | orchestrator | 2025-09-19 07:01:25 | INFO  | No task IDs specified, wait for 
all currently running tasks 2025-09-19 07:01:27.266560 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:27.267051 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:27.267258 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:27.267898 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:01:27.268584 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task 4b9b530b-c01f-44b6-be58-3c8e8673bd4a is in state STARTED 2025-09-19 07:01:27.269245 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:27.270080 | orchestrator | 2025-09-19 07:01:27 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:27.270126 | orchestrator | 2025-09-19 07:01:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:30.308158 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:30.308769 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:30.309126 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:30.309763 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:01:30.315096 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task 4b9b530b-c01f-44b6-be58-3c8e8673bd4a is in state STARTED 2025-09-19 07:01:30.315623 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:30.316139 | orchestrator | 2025-09-19 07:01:30 | INFO  | Task 
06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:30.316162 | orchestrator | 2025-09-19 07:01:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:33.357452 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:33.360460 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:33.368211 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:33.368734 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:01:33.370500 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task 4b9b530b-c01f-44b6-be58-3c8e8673bd4a is in state STARTED 2025-09-19 07:01:33.370867 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:33.371615 | orchestrator | 2025-09-19 07:01:33 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:33.371638 | orchestrator | 2025-09-19 07:01:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:36.555869 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:36.555945 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:36.555957 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:36.555967 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:01:36.555977 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task 4b9b530b-c01f-44b6-be58-3c8e8673bd4a is in state STARTED 2025-09-19 07:01:36.555987 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task 
0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:36.555997 | orchestrator | 2025-09-19 07:01:36 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:36.556007 | orchestrator | 2025-09-19 07:01:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:39.639499 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:39.639589 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:39.639604 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:39.639616 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:01:39.643278 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task 4b9b530b-c01f-44b6-be58-3c8e8673bd4a is in state STARTED 2025-09-19 07:01:39.646600 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:39.646667 | orchestrator | 2025-09-19 07:01:39 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:39.646681 | orchestrator | 2025-09-19 07:01:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:42.690993 | orchestrator | 2025-09-19 07:01:42 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:42.691537 | orchestrator | 2025-09-19 07:01:42 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:42.695505 | orchestrator | 2025-09-19 07:01:42 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:42.696236 | orchestrator | 2025-09-19 07:01:42 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:01:42.696266 | orchestrator | 2025-09-19 07:01:42 | INFO  | Task 
4b9b530b-c01f-44b6-be58-3c8e8673bd4a is in state STARTED 2025-09-19 07:01:42.696796 | orchestrator | 2025-09-19 07:01:42 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:42.698954 | orchestrator | 2025-09-19 07:01:42 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:42.698992 | orchestrator | 2025-09-19 07:01:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:45.854447 | orchestrator | 2025-09-19 07:01:45 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:45.854538 | orchestrator | 2025-09-19 07:01:45 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:45.856060 | orchestrator | 2025-09-19 07:01:45 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:45.857726 | orchestrator | 2025-09-19 07:01:45 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:01:45.858460 | orchestrator | 2025-09-19 07:01:45 | INFO  | Task 4b9b530b-c01f-44b6-be58-3c8e8673bd4a is in state STARTED 2025-09-19 07:01:45.860260 | orchestrator | 2025-09-19 07:01:45 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:45.863569 | orchestrator | 2025-09-19 07:01:45 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:45.864464 | orchestrator | 2025-09-19 07:01:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:48.940899 | orchestrator | 2025-09-19 07:01:48 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:48.941978 | orchestrator | 2025-09-19 07:01:48 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:48.943162 | orchestrator | 2025-09-19 07:01:48 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:48.944896 | orchestrator | 2025-09-19 07:01:48 | INFO  | Task 
6b68b430-4137-4704-b662-64a232198e73 is in state STARTED
2025-09-19 07:01:48.946207 | orchestrator | 2025-09-19 07:01:48 | INFO  | Task 55b381b4-5a83-407f-b39c-67960ddaccea is in state STARTED
2025-09-19 07:01:48.946906 | orchestrator | 2025-09-19 07:01:48 | INFO  | Task 4b9b530b-c01f-44b6-be58-3c8e8673bd4a is in state SUCCESS
2025-09-19 07:01:48.947468 | orchestrator |
2025-09-19 07:01:48.947494 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-19 07:01:48.947505 | orchestrator |
2025-09-19 07:01:48.947515 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-09-19 07:01:48.947525 | orchestrator | Friday 19 September 2025 07:01:32 +0000 (0:00:00.453) 0:00:00.453 ******
2025-09-19 07:01:48.947535 | orchestrator | changed: [testbed-manager]
2025-09-19 07:01:48.947545 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:01:48.947554 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:01:48.947563 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:01:48.947573 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:01:48.947582 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:01:48.947592 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:01:48.947602 | orchestrator |
2025-09-19 07:01:48.947611 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-09-19 07:01:48.947621 | orchestrator | Friday 19 September 2025 07:01:37 +0000 (0:00:04.515) 0:00:04.969 ******
2025-09-19 07:01:48.947631 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 07:01:48.947641 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 07:01:48.947651 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 07:01:48.947660 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 07:01:48.947670 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 07:01:48.947681 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 07:01:48.947691 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 07:01:48.947700 | orchestrator |
2025-09-19 07:01:48.947710 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-09-19 07:01:48.947720 | orchestrator | Friday 19 September 2025 07:01:39 +0000 (0:00:02.070) 0:00:07.039 ******
2025-09-19 07:01:48.947740 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:01:37.954258', 'end': '2025-09-19 07:01:37.980313', 'delta': '0:00:00.026055', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 07:01:48.947772 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout':
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:01:37.978821', 'end': '2025-09-19 07:01:38.990976', 'delta': '0:00:01.012155', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 07:01:48.947783 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:01:37.980786', 'end': '2025-09-19 07:01:37.989046', 'delta': '0:00:00.008260', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 07:01:48.947813 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:01:38.093766', 'end': '2025-09-19 07:01:38.102437', 'delta': '0:00:00.008671', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 07:01:48.947824 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:01:38.248956', 'end': '2025-09-19 07:01:38.258635', 'delta': '0:00:00.009679', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 07:01:48.948067 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:01:38.415795', 'end': '2025-09-19 07:01:38.426016', 'delta': '0:00:00.010221', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 07:01:48.948093 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 07:01:38.573167', 'end': '2025-09-19 07:01:38.581735', 'delta': '0:00:00.008568', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-19 07:01:48.948106 | orchestrator | 2025-09-19 07:01:48.948117 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
****
2025-09-19 07:01:48.948128 | orchestrator | Friday 19 September 2025 07:01:41 +0000 (0:00:02.464) 0:00:09.503 ******
2025-09-19 07:01:48.948139 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 07:01:48.948150 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 07:01:48.948161 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 07:01:48.948172 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 07:01:48.948183 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 07:01:48.948194 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 07:01:48.948205 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 07:01:48.948217 | orchestrator |
2025-09-19 07:01:48.948228 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-09-19 07:01:48.948241 | orchestrator | Friday 19 September 2025 07:01:43 +0000 (0:00:01.396) 0:00:10.900 ******
2025-09-19 07:01:48.948252 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-19 07:01:48.948263 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 07:01:48.948274 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 07:01:48.948286 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 07:01:48.948297 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 07:01:48.948308 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 07:01:48.948319 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 07:01:48.948330 | orchestrator |
2025-09-19 07:01:48.948341 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:01:48.948359 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:48.948371 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:48.948407 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:48.948418 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:48.948428 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:48.948437 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:48.948453 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:01:48.948463 | orchestrator |
2025-09-19 07:01:48.948473 | orchestrator |
2025-09-19 07:01:48.948482 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:01:48.948492 | orchestrator | Friday 19 September 2025 07:01:47 +0000 (0:00:04.278) 0:00:15.178 ******
2025-09-19 07:01:48.948502 | orchestrator | ===============================================================================
2025-09-19 07:01:48.948511 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.52s
2025-09-19 07:01:48.948619 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.28s
2025-09-19 07:01:48.948636 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.46s
2025-09-19 07:01:48.948646 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.07s
2025-09-19 07:01:48.948661 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.
---- 1.40s 2025-09-19 07:01:48.948676 | orchestrator | 2025-09-19 07:01:48 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:48.950259 | orchestrator | 2025-09-19 07:01:48 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:48.950450 | orchestrator | 2025-09-19 07:01:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:52.067962 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:52.068234 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:52.068265 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:52.068285 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:01:52.068306 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task 55b381b4-5a83-407f-b39c-67960ddaccea is in state STARTED 2025-09-19 07:01:52.068325 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:52.068357 | orchestrator | 2025-09-19 07:01:52 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:52.068377 | orchestrator | 2025-09-19 07:01:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:01:55.099488 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state STARTED 2025-09-19 07:01:55.099567 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:01:55.099581 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:01:55.102143 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state 
STARTED 2025-09-19 07:01:55.102656 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task 55b381b4-5a83-407f-b39c-67960ddaccea is in state STARTED 2025-09-19 07:01:55.103228 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state STARTED 2025-09-19 07:01:55.103916 | orchestrator | 2025-09-19 07:01:55 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:01:55.103943 | orchestrator | 2025-09-19 07:01:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:02:16.686477 | orchestrator | 2025-09-19 07:02:16 | INFO  | Task 0efa5688-db35-4ade-8683-628daf4b0f71 is in state SUCCESS
2025-09-19 07:02:22.808488 | orchestrator | 2025-09-19 07:02:22 | INFO  | Task bee68f7e-b23a-4e66-aded-5776741ceb65 is in state SUCCESS
2025-09-19 07:02:41.179244 | orchestrator | 2025-09-19 07:02:41 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:02:41.183026 | 
orchestrator | 2025-09-19 07:02:41 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state STARTED 2025-09-19 07:02:41.185620 | orchestrator | 2025-09-19 07:02:41 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:02:41.187548 | orchestrator | 2025-09-19 07:02:41 | INFO  | Task 55b381b4-5a83-407f-b39c-67960ddaccea is in state STARTED 2025-09-19 07:02:41.188815 | orchestrator | 2025-09-19 07:02:41 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:02:41.188852 | orchestrator | 2025-09-19 07:02:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:02:44.248201 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:02:44.249601 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task 7a78062d-aed9-4cbc-98b1-2c8b8217235d is in state SUCCESS 2025-09-19 07:02:44.251134 | orchestrator | 2025-09-19 07:02:44.251185 | orchestrator | 2025-09-19 07:02:44.251197 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-19 07:02:44.251209 | orchestrator | 2025-09-19 07:02:44.251221 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-19 07:02:44.251233 | orchestrator | Friday 19 September 2025 07:01:35 +0000 (0:00:00.308) 0:00:00.308 ****** 2025-09-19 07:02:44.251244 | orchestrator | ok: [testbed-manager] => { 2025-09-19 07:02:44.251257 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-09-19 07:02:44.251270 | orchestrator | } 2025-09-19 07:02:44.251282 | orchestrator | 2025-09-19 07:02:44.251293 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-19 07:02:44.251325 | orchestrator | Friday 19 September 2025 07:01:35 +0000 (0:00:00.506) 0:00:00.815 ****** 2025-09-19 07:02:44.251336 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.251348 | orchestrator | 2025-09-19 07:02:44.251385 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-19 07:02:44.251396 | orchestrator | Friday 19 September 2025 07:01:37 +0000 (0:00:01.454) 0:00:02.269 ****** 2025-09-19 07:02:44.251407 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-19 07:02:44.251418 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-19 07:02:44.251430 | orchestrator | 2025-09-19 07:02:44.251441 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-19 07:02:44.251452 | orchestrator | Friday 19 September 2025 07:01:39 +0000 (0:00:01.740) 0:00:04.009 ****** 2025-09-19 07:02:44.251463 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.251474 | orchestrator | 2025-09-19 07:02:44.251485 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-19 07:02:44.251521 | orchestrator | Friday 19 September 2025 07:01:41 +0000 (0:00:02.097) 0:00:06.106 ****** 2025-09-19 07:02:44.251534 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.251546 | orchestrator | 2025-09-19 07:02:44.251557 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-19 07:02:44.251568 | orchestrator | Friday 19 September 2025 07:01:42 +0000 (0:00:01.780) 0:00:07.887 ****** 2025-09-19 07:02:44.251579 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-09-19 07:02:44.251589 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.251600 | orchestrator | 2025-09-19 07:02:44.251611 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-19 07:02:44.251622 | orchestrator | Friday 19 September 2025 07:02:12 +0000 (0:00:29.351) 0:00:37.238 ****** 2025-09-19 07:02:44.251633 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.251643 | orchestrator | 2025-09-19 07:02:44.251654 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:02:44.251665 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:02:44.251677 | orchestrator | 2025-09-19 07:02:44.251688 | orchestrator | 2025-09-19 07:02:44.251701 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:02:44.251713 | orchestrator | Friday 19 September 2025 07:02:15 +0000 (0:00:02.926) 0:00:40.164 ****** 2025-09-19 07:02:44.251726 | orchestrator | =============================================================================== 2025-09-19 07:02:44.251739 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 29.35s 2025-09-19 07:02:44.251752 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.93s 2025-09-19 07:02:44.251764 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.10s 2025-09-19 07:02:44.251776 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.78s 2025-09-19 07:02:44.251789 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.74s 2025-09-19 07:02:44.251801 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.45s 2025-09-19 07:02:44.251813 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.51s 2025-09-19 07:02:44.251826 | orchestrator | 2025-09-19 07:02:44.251838 | orchestrator | 2025-09-19 07:02:44.251850 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-19 07:02:44.251863 | orchestrator | 2025-09-19 07:02:44.251875 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-19 07:02:44.251887 | orchestrator | Friday 19 September 2025 07:01:34 +0000 (0:00:00.808) 0:00:00.808 ****** 2025-09-19 07:02:44.251900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-19 07:02:44.251921 | orchestrator | 2025-09-19 07:02:44.251933 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-19 07:02:44.251946 | orchestrator | Friday 19 September 2025 07:01:35 +0000 (0:00:01.076) 0:00:01.885 ****** 2025-09-19 07:02:44.251958 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-19 07:02:44.251971 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-19 07:02:44.251983 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-19 07:02:44.252014 | orchestrator | 2025-09-19 07:02:44.252027 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-19 07:02:44.252040 | orchestrator | Friday 19 September 2025 07:01:37 +0000 (0:00:02.294) 0:00:04.179 ****** 2025-09-19 07:02:44.252052 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.252063 | orchestrator | 2025-09-19 07:02:44.252074 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-19 07:02:44.252085 | orchestrator | Friday 19 September 2025 07:01:39 +0000 (0:00:01.665) 
0:00:05.844 ****** 2025-09-19 07:02:44.252109 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-19 07:02:44.252120 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.252131 | orchestrator | 2025-09-19 07:02:44.252175 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-19 07:02:44.252188 | orchestrator | Friday 19 September 2025 07:02:12 +0000 (0:00:33.146) 0:00:38.991 ****** 2025-09-19 07:02:44.252199 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.252209 | orchestrator | 2025-09-19 07:02:44.252220 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-19 07:02:44.252231 | orchestrator | Friday 19 September 2025 07:02:14 +0000 (0:00:02.080) 0:00:41.072 ****** 2025-09-19 07:02:44.252241 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.252252 | orchestrator | 2025-09-19 07:02:44.252262 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-19 07:02:44.252273 | orchestrator | Friday 19 September 2025 07:02:15 +0000 (0:00:01.074) 0:00:42.146 ****** 2025-09-19 07:02:44.252284 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.252294 | orchestrator | 2025-09-19 07:02:44.252305 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-19 07:02:44.252315 | orchestrator | Friday 19 September 2025 07:02:18 +0000 (0:00:02.805) 0:00:44.952 ****** 2025-09-19 07:02:44.252326 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.252337 | orchestrator | 2025-09-19 07:02:44.252347 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-19 07:02:44.252388 | orchestrator | Friday 19 September 2025 07:02:19 +0000 (0:00:01.181) 0:00:46.134 ****** 2025-09-19 07:02:44.252399 | orchestrator | changed: 
[testbed-manager] 2025-09-19 07:02:44.252410 | orchestrator | 2025-09-19 07:02:44.252420 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-19 07:02:44.252431 | orchestrator | Friday 19 September 2025 07:02:20 +0000 (0:00:00.566) 0:00:46.702 ****** 2025-09-19 07:02:44.252441 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.252452 | orchestrator | 2025-09-19 07:02:44.252463 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:02:44.252473 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:02:44.252484 | orchestrator | 2025-09-19 07:02:44.252495 | orchestrator | 2025-09-19 07:02:44.252510 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:02:44.252521 | orchestrator | Friday 19 September 2025 07:02:20 +0000 (0:00:00.366) 0:00:47.069 ****** 2025-09-19 07:02:44.252532 | orchestrator | =============================================================================== 2025-09-19 07:02:44.252542 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.15s 2025-09-19 07:02:44.252561 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.81s 2025-09-19 07:02:44.252571 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.29s 2025-09-19 07:02:44.252582 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.08s 2025-09-19 07:02:44.252593 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.66s 2025-09-19 07:02:44.252603 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.18s 2025-09-19 07:02:44.252614 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.09s 
2025-09-19 07:02:44.252624 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.07s 2025-09-19 07:02:44.252635 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.57s 2025-09-19 07:02:44.252645 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.37s 2025-09-19 07:02:44.252656 | orchestrator | 2025-09-19 07:02:44.252667 | orchestrator | 2025-09-19 07:02:44.252677 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:02:44.252688 | orchestrator | 2025-09-19 07:02:44.252698 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:02:44.252709 | orchestrator | Friday 19 September 2025 07:01:34 +0000 (0:00:00.853) 0:00:00.853 ****** 2025-09-19 07:02:44.252720 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-19 07:02:44.252730 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-19 07:02:44.252741 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-19 07:02:44.252751 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-19 07:02:44.252762 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-19 07:02:44.252772 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-19 07:02:44.252783 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-19 07:02:44.252794 | orchestrator | 2025-09-19 07:02:44.252804 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-19 07:02:44.252815 | orchestrator | 2025-09-19 07:02:44.252826 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-19 07:02:44.252836 | orchestrator | Friday 19 September 2025 07:01:36 +0000 
(0:00:01.722) 0:00:02.576 ****** 2025-09-19 07:02:44.252860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:02:44.252874 | orchestrator | 2025-09-19 07:02:44.252885 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-19 07:02:44.252895 | orchestrator | Friday 19 September 2025 07:01:38 +0000 (0:00:02.057) 0:00:04.634 ****** 2025-09-19 07:02:44.252906 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:02:44.252917 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:02:44.252927 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:02:44.252938 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.252949 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:02:44.252966 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:02:44.252977 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:02:44.252988 | orchestrator | 2025-09-19 07:02:44.252999 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-19 07:02:44.253010 | orchestrator | Friday 19 September 2025 07:01:40 +0000 (0:00:02.166) 0:00:06.801 ****** 2025-09-19 07:02:44.253020 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:02:44.253031 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:02:44.253042 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.253052 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:02:44.253063 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:02:44.253074 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:02:44.253084 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:02:44.253101 | orchestrator | 2025-09-19 07:02:44.253112 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-19 07:02:44.253123 | 
orchestrator | Friday 19 September 2025 07:01:43 +0000 (0:00:02.846) 0:00:09.647 ****** 2025-09-19 07:02:44.253133 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:44.253144 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.253155 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:44.253166 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:44.253177 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:44.253201 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:44.253212 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:44.253223 | orchestrator | 2025-09-19 07:02:44.253234 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-19 07:02:44.253245 | orchestrator | Friday 19 September 2025 07:01:46 +0000 (0:00:02.704) 0:00:12.352 ****** 2025-09-19 07:02:44.253255 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:44.253266 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:44.253276 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:44.253287 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:44.253298 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:44.253308 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:44.253318 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.253329 | orchestrator | 2025-09-19 07:02:44.253340 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-19 07:02:44.253350 | orchestrator | Friday 19 September 2025 07:01:56 +0000 (0:00:10.071) 0:00:22.424 ****** 2025-09-19 07:02:44.253409 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:44.253425 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:44.253436 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:44.253447 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:44.253458 | orchestrator | changed: [testbed-node-3] 
2025-09-19 07:02:44.253468 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:44.253479 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.253490 | orchestrator | 2025-09-19 07:02:44.253501 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-19 07:02:44.253512 | orchestrator | Friday 19 September 2025 07:02:21 +0000 (0:00:24.705) 0:00:47.130 ****** 2025-09-19 07:02:44.253524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:02:44.253536 | orchestrator | 2025-09-19 07:02:44.253547 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-19 07:02:44.253558 | orchestrator | Friday 19 September 2025 07:02:22 +0000 (0:00:01.220) 0:00:48.350 ****** 2025-09-19 07:02:44.253569 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-19 07:02:44.253580 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-19 07:02:44.253592 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-19 07:02:44.253602 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-19 07:02:44.253613 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-19 07:02:44.253624 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-19 07:02:44.253635 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-19 07:02:44.253646 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-19 07:02:44.253657 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-19 07:02:44.253667 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-19 07:02:44.253678 | orchestrator | changed: [testbed-node-2] => 
(item=stream.conf) 2025-09-19 07:02:44.253689 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-19 07:02:44.253700 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-19 07:02:44.253717 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-19 07:02:44.253728 | orchestrator | 2025-09-19 07:02:44.253739 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-19 07:02:44.253750 | orchestrator | Friday 19 September 2025 07:02:26 +0000 (0:00:04.257) 0:00:52.607 ****** 2025-09-19 07:02:44.253761 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:02:44.253772 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.253783 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:02:44.253794 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:02:44.253805 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:02:44.253816 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:02:44.253826 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:02:44.253837 | orchestrator | 2025-09-19 07:02:44.253848 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-19 07:02:44.253859 | orchestrator | Friday 19 September 2025 07:02:27 +0000 (0:00:01.081) 0:00:53.690 ****** 2025-09-19 07:02:44.253870 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:44.253881 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:44.253891 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.253902 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:44.253912 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:44.253923 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:44.253934 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:44.253944 | orchestrator | 2025-09-19 07:02:44.253955 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] 
*************** 2025-09-19 07:02:44.253973 | orchestrator | Friday 19 September 2025 07:02:29 +0000 (0:00:01.999) 0:00:55.689 ****** 2025-09-19 07:02:44.253985 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:02:44.253996 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.254006 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:02:44.254085 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:02:44.254100 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:02:44.254111 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:02:44.254122 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:02:44.254132 | orchestrator | 2025-09-19 07:02:44.254143 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-19 07:02:44.254154 | orchestrator | Friday 19 September 2025 07:02:32 +0000 (0:00:02.592) 0:00:58.281 ****** 2025-09-19 07:02:44.254165 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:02:44.254176 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:02:44.254187 | orchestrator | ok: [testbed-manager] 2025-09-19 07:02:44.254198 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:02:44.254208 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:02:44.254219 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:02:44.254230 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:02:44.254241 | orchestrator | 2025-09-19 07:02:44.254252 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-19 07:02:44.254263 | orchestrator | Friday 19 September 2025 07:02:34 +0000 (0:00:02.635) 0:01:00.917 ****** 2025-09-19 07:02:44.254274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-19 07:02:44.254287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:02:44.254298 | orchestrator | 2025-09-19 07:02:44.254309 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-19 07:02:44.254319 | orchestrator | Friday 19 September 2025 07:02:36 +0000 (0:00:01.373) 0:01:02.291 ****** 2025-09-19 07:02:44.254330 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.254341 | orchestrator | 2025-09-19 07:02:44.254352 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-19 07:02:44.254380 | orchestrator | Friday 19 September 2025 07:02:38 +0000 (0:00:02.128) 0:01:04.419 ****** 2025-09-19 07:02:44.254401 | orchestrator | changed: [testbed-manager] 2025-09-19 07:02:44.254417 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:02:44.254428 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:02:44.254439 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:02:44.254449 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:02:44.254460 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:02:44.254471 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:02:44.254482 | orchestrator | 2025-09-19 07:02:44.254493 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:02:44.254503 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:02:44.254514 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:02:44.254525 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:02:44.254536 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:02:44.254547 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-09-19 07:02:44.254558 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:02:44.254569 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:02:44.254580 | orchestrator | 2025-09-19 07:02:44.254591 | orchestrator | 2025-09-19 07:02:44.254602 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:02:44.254613 | orchestrator | Friday 19 September 2025 07:02:41 +0000 (0:00:03.473) 0:01:07.892 ****** 2025-09-19 07:02:44.254624 | orchestrator | =============================================================================== 2025-09-19 07:02:44.254634 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 24.71s 2025-09-19 07:02:44.254645 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.07s 2025-09-19 07:02:44.254656 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.26s 2025-09-19 07:02:44.254667 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.47s 2025-09-19 07:02:44.254678 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.85s 2025-09-19 07:02:44.254688 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.70s 2025-09-19 07:02:44.254699 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.64s 2025-09-19 07:02:44.254710 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.59s 2025-09-19 07:02:44.254721 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.17s 2025-09-19 07:02:44.254731 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.13s 2025-09-19 
07:02:44.254742 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.06s 2025-09-19 07:02:44.254760 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.00s 2025-09-19 07:02:44.254771 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.72s 2025-09-19 07:02:44.254782 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.37s 2025-09-19 07:02:44.254793 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.22s 2025-09-19 07:02:44.254804 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.08s 2025-09-19 07:02:44.254815 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:02:44.254832 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task 55b381b4-5a83-407f-b39c-67960ddaccea is in state STARTED 2025-09-19 07:02:44.255151 | orchestrator | 2025-09-19 07:02:44 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:02:44.255243 | orchestrator | 2025-09-19 07:02:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:02:47.283606 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:02:47.283677 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:02:47.285340 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task 55b381b4-5a83-407f-b39c-67960ddaccea is in state STARTED 2025-09-19 07:02:47.286560 | orchestrator | 2025-09-19 07:02:47 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:02:47.286572 | orchestrator | 2025-09-19 07:02:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:02:50.337191 | orchestrator | 2025-09-19 07:02:50 | INFO  
| Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:02:53.401814 | orchestrator | 2025-09-19 07:02:53 | INFO  | Task 55b381b4-5a83-407f-b39c-67960ddaccea is in state SUCCESS 2025-09-19 07:02:53.404479 | orchestrator | 2025-09-19 07:02:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:45.358683 | orchestrator | 2025-09-19 07:03:45 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state STARTED 2025-09-19 07:03:45.360361 | orchestrator | 2025-09-19 07:03:45 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:03:45.361557 | orchestrator | 2025-09-19 07:03:45 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state
STARTED 2025-09-19 07:03:45.361590 | orchestrator | 2025-09-19 07:03:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:48.419067 | orchestrator | 2025-09-19 07:03:48.419169 | orchestrator | 2025-09-19 07:03:48.419184 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-19 07:03:48.419197 | orchestrator | 2025-09-19 07:03:48.419208 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-19 07:03:48.419220 | orchestrator | Friday 19 September 2025 07:01:52 +0000 (0:00:00.206) 0:00:00.206 ****** 2025-09-19 07:03:48.419231 | orchestrator | ok: [testbed-manager] 2025-09-19 07:03:48.419243 | orchestrator | 2025-09-19 07:03:48.419255 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-19 07:03:48.419265 | orchestrator | Friday 19 September 2025 07:01:52 +0000 (0:00:00.749) 0:00:00.955 ****** 2025-09-19 07:03:48.419277 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-19 07:03:48.419288 | orchestrator | 2025-09-19 07:03:48.419299 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-19 07:03:48.419310 | orchestrator | Friday 19 September 2025 07:01:53 +0000 (0:00:00.769) 0:00:01.725 ****** 2025-09-19 07:03:48.419321 | orchestrator | changed: [testbed-manager] 2025-09-19 07:03:48.419400 | orchestrator | 2025-09-19 07:03:48.419412 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-19 07:03:48.419423 | orchestrator | Friday 19 September 2025 07:01:54 +0000 (0:00:01.019) 0:00:02.745 ****** 2025-09-19 07:03:48.419434 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-09-19 07:03:48.419447 | orchestrator | ok: [testbed-manager] 2025-09-19 07:03:48.419458 | orchestrator | 2025-09-19 07:03:48.419469 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-19 07:03:48.419480 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:45.326) 0:00:48.072 ****** 2025-09-19 07:03:48.419491 | orchestrator | changed: [testbed-manager] 2025-09-19 07:03:48.419502 | orchestrator | 2025-09-19 07:03:48.419512 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:03:48.419524 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:03:48.419537 | orchestrator | 2025-09-19 07:03:48.419548 | orchestrator | 2025-09-19 07:03:48.419559 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:03:48.419571 | orchestrator | Friday 19 September 2025 07:02:51 +0000 (0:00:11.555) 0:00:59.627 ****** 2025-09-19 07:03:48.419583 | orchestrator | =============================================================================== 2025-09-19 07:03:48.419594 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 45.33s 2025-09-19 07:03:48.419605 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 11.56s 2025-09-19 07:03:48.419618 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.02s 2025-09-19 07:03:48.419631 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.77s 2025-09-19 07:03:48.419644 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.75s 2025-09-19 07:03:48.419656 | orchestrator | 2025-09-19 07:03:48.419670 | orchestrator | 2025-09-19 07:03:48 | INFO  | Task ad6ec7cd-7459-4f10-a2b3-16932db5cad1 is in state SUCCESS 2025-09-19 
07:03:48.422266 | orchestrator | 2025-09-19 07:03:48.422396 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-19 07:03:48.422422 | orchestrator | 2025-09-19 07:03:48.422435 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-19 07:03:48.422447 | orchestrator | Friday 19 September 2025 07:01:26 +0000 (0:00:00.216) 0:00:00.216 ****** 2025-09-19 07:03:48.422459 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:03:48.422472 | orchestrator | 2025-09-19 07:03:48.422483 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-19 07:03:48.422529 | orchestrator | Friday 19 September 2025 07:01:27 +0000 (0:00:01.150) 0:00:01.367 ****** 2025-09-19 07:03:48.422541 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-19 07:03:48.422552 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-19 07:03:48.422563 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-19 07:03:48.422574 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-19 07:03:48.422601 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-19 07:03:48.422612 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-19 07:03:48.422623 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-19 07:03:48.422634 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-19 07:03:48.422649 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 
2025-09-19 07:03:48.422660 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-19 07:03:48.422670 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-19 07:03:48.422682 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-19 07:03:48.422693 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-19 07:03:48.422704 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-19 07:03:48.422715 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-19 07:03:48.422726 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-19 07:03:48.422736 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-19 07:03:48.422747 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-19 07:03:48.422757 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-19 07:03:48.422768 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-19 07:03:48.422779 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-19 07:03:48.422790 | orchestrator | 2025-09-19 07:03:48.422800 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-19 07:03:48.422811 | orchestrator | Friday 19 September 2025 07:01:32 +0000 (0:00:04.378) 0:00:05.746 ****** 2025-09-19 07:03:48.422822 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2025-09-19 07:03:48.422834 | orchestrator | 2025-09-19 07:03:48.422845 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-19 07:03:48.422855 | orchestrator | Friday 19 September 2025 07:01:33 +0000 (0:00:01.514) 0:00:07.260 ****** 2025-09-19 07:03:48.422870 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.422886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.422948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.422963 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.422980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.422992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.423016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.423028 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423086 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.423099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423182 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.423264 | orchestrator | 2025-09-19 07:03:48.423276 | orchestrator | TASK 
[service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-19 07:03:48.423287 | orchestrator | Friday 19 September 2025 07:01:39 +0000 (0:00:05.937) 0:00:13.198 ****** 2025-09-19 07:03:48.423298 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423322 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423367 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:03:48.423380 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423473 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:03:48.423485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423526 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:03:48.423551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423591 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:03:48.423602 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:03:48.423613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423653 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:03:48.423664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423708 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:03:48.423720 | orchestrator | 2025-09-19 07:03:48.423731 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-19 07:03:48.423742 | orchestrator | Friday 19 September 2025 07:01:41 +0000 (0:00:01.650) 0:00:14.848 ****** 2025-09-19 07:03:48.423753 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423765 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423776 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423834 | orchestrator | 
skipping: [testbed-manager] 2025-09-19 07:03:48.423845 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:03:48.423869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.423955 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:03:48.423966 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:03:48.423977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.423994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.424005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.424016 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:03:48.424028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.424044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.424062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.424073 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:03:48.424084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 07:03:48.424095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.424107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.424118 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:03:48.424129 | orchestrator | 2025-09-19 07:03:48.424140 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-19 07:03:48.424151 | orchestrator | Friday 19 September 2025 07:01:44 +0000 (0:00:03.034) 0:00:17.883 ****** 2025-09-19 07:03:48.424162 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:03:48.424173 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:03:48.424183 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:03:48.424195 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:03:48.424205 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:03:48.424222 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:03:48.424233 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:03:48.424244 | orchestrator | 2025-09-19 07:03:48.424255 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-19 07:03:48.424266 | orchestrator | Friday 19 September 2025 07:01:45 +0000 (0:00:01.177) 0:00:19.061 ****** 2025-09-19 07:03:48.424277 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:03:48.424288 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 07:03:48.424299 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:03:48.424309 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:03:48.424320 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:03:48.424354 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:03:48.424366 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:03:48.424376 | orchestrator | 2025-09-19 07:03:48.424388 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-19 07:03:48.424399 | orchestrator | Friday 19 September 2025 07:01:46 +0000 (0:00:00.879) 0:00:19.940 ****** 2025-09-19 07:03:48.424417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.424434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.424446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.424458 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.424470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.424481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.424499 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.424537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424549 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424649 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.424705 | orchestrator | 2025-09-19 07:03:48.424716 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-19 07:03:48.424728 | orchestrator | Friday 19 September 2025 07:01:52 +0000 (0:00:05.801) 0:00:25.742 ****** 2025-09-19 07:03:48.424739 | orchestrator | [WARNING]: Skipped 2025-09-19 07:03:48.424751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-19 07:03:48.424762 | orchestrator | to this access issue: 2025-09-19 07:03:48.424773 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-19 07:03:48.424784 | orchestrator | directory 2025-09-19 07:03:48.424795 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:03:48.424806 | orchestrator | 2025-09-19 07:03:48.424817 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-19 07:03:48.424834 | orchestrator | Friday 19 September 2025 07:01:53 +0000 (0:00:00.938) 0:00:26.680 ****** 2025-09-19 07:03:48.424845 | orchestrator | [WARNING]: Skipped 2025-09-19 07:03:48.424856 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-19 07:03:48.424873 | orchestrator | to this access issue: 2025-09-19 07:03:48.424884 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-19 07:03:48.424895 | orchestrator | directory 2025-09-19 07:03:48.424906 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:03:48.424917 | orchestrator | 2025-09-19 07:03:48.424928 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-19 
07:03:48.424938 | orchestrator | Friday 19 September 2025 07:01:54 +0000 (0:00:00.929) 0:00:27.610 ****** 2025-09-19 07:03:48.424949 | orchestrator | [WARNING]: Skipped 2025-09-19 07:03:48.424960 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-19 07:03:48.424971 | orchestrator | to this access issue: 2025-09-19 07:03:48.424982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-19 07:03:48.424993 | orchestrator | directory 2025-09-19 07:03:48.425004 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:03:48.425015 | orchestrator | 2025-09-19 07:03:48.425025 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-19 07:03:48.425036 | orchestrator | Friday 19 September 2025 07:01:55 +0000 (0:00:01.032) 0:00:28.643 ****** 2025-09-19 07:03:48.425047 | orchestrator | [WARNING]: Skipped 2025-09-19 07:03:48.425058 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-19 07:03:48.425069 | orchestrator | to this access issue: 2025-09-19 07:03:48.425080 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-19 07:03:48.425091 | orchestrator | directory 2025-09-19 07:03:48.425102 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:03:48.425113 | orchestrator | 2025-09-19 07:03:48.425124 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-19 07:03:48.425135 | orchestrator | Friday 19 September 2025 07:01:56 +0000 (0:00:01.171) 0:00:29.814 ****** 2025-09-19 07:03:48.425145 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:48.425160 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:48.425172 | orchestrator | changed: [testbed-manager] 2025-09-19 07:03:48.425182 | orchestrator | changed: [testbed-node-2] 2025-09-19 
07:03:48.425193 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:03:48.425204 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:03:48.425215 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:03:48.425225 | orchestrator | 2025-09-19 07:03:48.425236 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-19 07:03:48.425247 | orchestrator | Friday 19 September 2025 07:02:00 +0000 (0:00:03.794) 0:00:33.608 ****** 2025-09-19 07:03:48.425258 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:03:48.425269 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:03:48.425281 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:03:48.425292 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:03:48.425303 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:03:48.425313 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:03:48.425373 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 07:03:48.425388 | orchestrator | 2025-09-19 07:03:48.425399 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-19 07:03:48.425418 | orchestrator | Friday 19 September 2025 07:02:04 +0000 (0:00:04.085) 0:00:37.694 ****** 2025-09-19 07:03:48.425429 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:48.425440 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:48.425451 | orchestrator | changed: [testbed-manager] 2025-09-19 
07:03:48.425461 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:48.425472 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:03:48.425483 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:03:48.425493 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:03:48.425504 | orchestrator | 2025-09-19 07:03:48.425515 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-19 07:03:48.425526 | orchestrator | Friday 19 September 2025 07:02:08 +0000 (0:00:04.078) 0:00:41.772 ****** 2025-09-19 07:03:48.425537 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.425555 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.425568 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.425579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.425599 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.425611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.425629 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.425640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.425652 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.425670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.425682 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.425693 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.425705 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.425716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.425734 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.425746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.425757 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-09-19 07:03:48.425775 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.425786 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.425803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:03:48.425819 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.425837 | orchestrator | 2025-09-19 07:03:48.425848 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-19 07:03:48.425859 | orchestrator | Friday 19 September 2025 07:02:10 +0000 (0:00:02.417) 0:00:44.189 ****** 2025-09-19 07:03:48.425870 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:03:48.425881 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:03:48.425892 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:03:48.425903 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:03:48.425913 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:03:48.425924 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:03:48.425935 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 07:03:48.425945 | orchestrator | 2025-09-19 07:03:48.425956 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-19 07:03:48.425967 | orchestrator | Friday 19 September 2025 07:02:14 +0000 (0:00:03.731) 0:00:47.921 ****** 2025-09-19 07:03:48.425978 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:03:48.425989 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:03:48.426000 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:03:48.426011 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:03:48.426061 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:03:48.426072 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:03:48.426083 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 07:03:48.426094 | orchestrator | 2025-09-19 07:03:48.426105 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-19 07:03:48.426115 | orchestrator | Friday 19 September 2025 07:02:17 +0000 (0:00:02.816) 0:00:50.738 ****** 2025-09-19 07:03:48.426126 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.426146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.426158 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.426182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.426193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.426205 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 07:03:48.426227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-09-19 07:03:48.426256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426301 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426313 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 
07:03:48.426464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:03:48.426475 | orchestrator | 2025-09-19 07:03:48.426486 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-19 07:03:48.426497 | orchestrator | Friday 19 September 2025 07:02:21 +0000 (0:00:03.954) 0:00:54.693 ****** 2025-09-19 07:03:48.426508 | orchestrator | changed: [testbed-manager] 2025-09-19 07:03:48.426519 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:48.426529 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:48.426540 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:48.426551 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:03:48.426561 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:03:48.426572 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:03:48.426583 | orchestrator | 2025-09-19 07:03:48.426593 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-19 07:03:48.426604 | orchestrator | Friday 19 September 2025 07:02:22 +0000 (0:00:01.517) 0:00:56.210 ****** 2025-09-19 07:03:48.426615 | orchestrator | changed: [testbed-manager] 2025-09-19 07:03:48.426625 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:48.426636 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:48.426646 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:48.426657 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:03:48.426667 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:03:48.426678 | orchestrator | changed: [testbed-node-5] 
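The two volume tasks above (`Creating log volume` and `Link kolla_logs volume to /var/log/kolla`) amount to creating a named volume and symlinking its mountpoint. A minimal sketch, assuming Docker's default data root (the playbook itself resolves the real mountpoint, e.g. via volume inspection, rather than hard-coding it):

```python
import subprocess

def link_kolla_logs(dry_run: bool = True) -> list:
    # Sketch of the two log-volume tasks: create the named volume,
    # then symlink its mountpoint to /var/log/kolla. The data-root
    # path below is an assumption (Docker's default location).
    commands = [
        "docker volume create kolla_logs",
        "ln -sfn /var/lib/docker/volumes/kolla_logs/_data /var/log/kolla",
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd.split(), check=True)
    return commands

print(link_kolla_logs())
```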
2025-09-19 07:03:48.426688 | orchestrator | 2025-09-19 07:03:48.426699 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:03:48.426710 | orchestrator | Friday 19 September 2025 07:02:23 +0000 (0:00:01.387) 0:00:57.598 ****** 2025-09-19 07:03:48.426721 | orchestrator | 2025-09-19 07:03:48.426731 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:03:48.426742 | orchestrator | Friday 19 September 2025 07:02:24 +0000 (0:00:00.083) 0:00:57.681 ****** 2025-09-19 07:03:48.426753 | orchestrator | 2025-09-19 07:03:48.426763 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:03:48.426774 | orchestrator | Friday 19 September 2025 07:02:24 +0000 (0:00:00.066) 0:00:57.748 ****** 2025-09-19 07:03:48.426785 | orchestrator | 2025-09-19 07:03:48.426796 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:03:48.426806 | orchestrator | Friday 19 September 2025 07:02:24 +0000 (0:00:00.087) 0:00:57.842 ****** 2025-09-19 07:03:48.426823 | orchestrator | 2025-09-19 07:03:48.426834 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:03:48.426844 | orchestrator | Friday 19 September 2025 07:02:24 +0000 (0:00:00.205) 0:00:58.048 ****** 2025-09-19 07:03:48.426855 | orchestrator | 2025-09-19 07:03:48.426866 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:03:48.426876 | orchestrator | Friday 19 September 2025 07:02:24 +0000 (0:00:00.067) 0:00:58.115 ****** 2025-09-19 07:03:48.426887 | orchestrator | 2025-09-19 07:03:48.426897 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 07:03:48.426908 | orchestrator | Friday 19 September 2025 07:02:24 +0000 (0:00:00.058) 0:00:58.174 
****** 2025-09-19 07:03:48.426919 | orchestrator | 2025-09-19 07:03:48.426930 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-19 07:03:48.426947 | orchestrator | Friday 19 September 2025 07:02:24 +0000 (0:00:00.080) 0:00:58.255 ****** 2025-09-19 07:03:48.426958 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:48.426969 | orchestrator | changed: [testbed-manager] 2025-09-19 07:03:48.426979 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:48.426990 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:03:48.427000 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:03:48.427011 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:48.427021 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:03:48.427032 | orchestrator | 2025-09-19 07:03:48.427043 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-19 07:03:48.427054 | orchestrator | Friday 19 September 2025 07:03:04 +0000 (0:00:39.403) 0:01:37.659 ****** 2025-09-19 07:03:48.427064 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:48.427075 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:48.427085 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:03:48.427096 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:03:48.427106 | orchestrator | changed: [testbed-manager] 2025-09-19 07:03:48.427117 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:03:48.427127 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:48.427138 | orchestrator | 2025-09-19 07:03:48.427149 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-19 07:03:48.427160 | orchestrator | Friday 19 September 2025 07:03:36 +0000 (0:00:32.666) 0:02:10.325 ****** 2025-09-19 07:03:48.427170 | orchestrator | ok: [testbed-manager] 2025-09-19 07:03:48.427181 | orchestrator | ok: [testbed-node-0] 2025-09-19 
07:03:48.427192 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:03:48.427202 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:03:48.427213 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:03:48.427224 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:03:48.427234 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:03:48.427245 | orchestrator | 2025-09-19 07:03:48.427256 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-19 07:03:48.427267 | orchestrator | Friday 19 September 2025 07:03:38 +0000 (0:00:01.978) 0:02:12.304 ****** 2025-09-19 07:03:48.427277 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:03:48.427288 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:03:48.427303 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:03:48.427314 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:03:48.427345 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:03:48.427357 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:03:48.427368 | orchestrator | changed: [testbed-manager] 2025-09-19 07:03:48.427379 | orchestrator | 2025-09-19 07:03:48.427390 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:03:48.427401 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:03:48.427412 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:03:48.427430 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:03:48.427441 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:03:48.427452 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:03:48.427463 | orchestrator | testbed-node-4 : ok=18  
changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:03:48.427473 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 07:03:48.427484 | orchestrator | 2025-09-19 07:03:48.427495 | orchestrator | 2025-09-19 07:03:48.427506 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:03:48.427516 | orchestrator | Friday 19 September 2025 07:03:47 +0000 (0:00:08.388) 0:02:20.692 ****** 2025-09-19 07:03:48.427527 | orchestrator | =============================================================================== 2025-09-19 07:03:48.427538 | orchestrator | common : Restart fluentd container ------------------------------------- 39.40s 2025-09-19 07:03:48.427549 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.67s 2025-09-19 07:03:48.427559 | orchestrator | common : Restart cron container ----------------------------------------- 8.39s 2025-09-19 07:03:48.427570 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.94s 2025-09-19 07:03:48.427581 | orchestrator | common : Copying over config.json files for services -------------------- 5.80s 2025-09-19 07:03:48.427591 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.38s 2025-09-19 07:03:48.427602 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.09s 2025-09-19 07:03:48.427613 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.08s 2025-09-19 07:03:48.427623 | orchestrator | common : Check common containers ---------------------------------------- 3.96s 2025-09-19 07:03:48.427634 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.79s 2025-09-19 07:03:48.427644 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox 
------------------------ 3.73s 2025-09-19 07:03:48.427655 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.03s 2025-09-19 07:03:48.427666 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.82s 2025-09-19 07:03:48.427676 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.42s 2025-09-19 07:03:48.427693 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.98s 2025-09-19 07:03:48.427704 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.65s 2025-09-19 07:03:48.427715 | orchestrator | common : Creating log volume -------------------------------------------- 1.52s 2025-09-19 07:03:48.427726 | orchestrator | common : include_tasks -------------------------------------------------- 1.51s 2025-09-19 07:03:48.427736 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.39s 2025-09-19 07:03:48.427747 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.18s 2025-09-19 07:03:48.427758 | orchestrator | 2025-09-19 07:03:48 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:03:48.427769 | orchestrator | 2025-09-19 07:03:48 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:03:48.427780 | orchestrator | 2025-09-19 07:03:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:51.469840 | orchestrator | 2025-09-19 07:03:51 | INFO  | Task e4f538a5-6878-4d7c-b47e-29b656744f5a is in state STARTED 2025-09-19 07:03:51.473731 | orchestrator | 2025-09-19 07:03:51 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:03:51.474417 | orchestrator | 2025-09-19 07:03:51 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:03:51.475427 | orchestrator | 2025-09-19 
07:03:51 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:03:51.476293 | orchestrator | 2025-09-19 07:03:51 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:03:51.477011 | orchestrator | 2025-09-19 07:03:51 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:03:51.477034 | orchestrator | 2025-09-19 07:03:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:54.504463 | orchestrator | 2025-09-19 07:03:54 | INFO  | Task e4f538a5-6878-4d7c-b47e-29b656744f5a is in state STARTED 2025-09-19 07:03:54.504783 | orchestrator | 2025-09-19 07:03:54 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:03:54.505408 | orchestrator | 2025-09-19 07:03:54 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:03:54.506099 | orchestrator | 2025-09-19 07:03:54 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:03:54.506543 | orchestrator | 2025-09-19 07:03:54 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:03:54.507188 | orchestrator | 2025-09-19 07:03:54 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:03:54.507210 | orchestrator | 2025-09-19 07:03:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:03:57.545601 | orchestrator | 2025-09-19 07:03:57 | INFO  | Task e4f538a5-6878-4d7c-b47e-29b656744f5a is in state STARTED 2025-09-19 07:03:57.545678 | orchestrator | 2025-09-19 07:03:57 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:03:57.546120 | orchestrator | 2025-09-19 07:03:57 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:03:57.546784 | orchestrator | 2025-09-19 07:03:57 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:03:57.547493 | orchestrator | 2025-09-19 
07:03:57 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:03:57.548108 | orchestrator | 2025-09-19 07:03:57 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:03:57.548130 | orchestrator | 2025-09-19 07:03:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:00.571849 | orchestrator | 2025-09-19 07:04:00 | INFO  | Task e4f538a5-6878-4d7c-b47e-29b656744f5a is in state STARTED 2025-09-19 07:04:00.572169 | orchestrator | 2025-09-19 07:04:00 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:00.573002 | orchestrator | 2025-09-19 07:04:00 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:00.573933 | orchestrator | 2025-09-19 07:04:00 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:04:00.575573 | orchestrator | 2025-09-19 07:04:00 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:00.576072 | orchestrator | 2025-09-19 07:04:00 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:00.576099 | orchestrator | 2025-09-19 07:04:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:03.622578 | orchestrator | 2025-09-19 07:04:03 | INFO  | Task e4f538a5-6878-4d7c-b47e-29b656744f5a is in state STARTED 2025-09-19 07:04:03.624361 | orchestrator | 2025-09-19 07:04:03 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:03.626183 | orchestrator | 2025-09-19 07:04:03 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:03.628141 | orchestrator | 2025-09-19 07:04:03 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:04:03.630065 | orchestrator | 2025-09-19 07:04:03 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:03.632159 | orchestrator | 2025-09-19 
07:04:03 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:03.632590 | orchestrator | 2025-09-19 07:04:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:06.654465 | orchestrator | 2025-09-19 07:04:06 | INFO  | Task e4f538a5-6878-4d7c-b47e-29b656744f5a is in state STARTED 2025-09-19 07:04:06.654963 | orchestrator | 2025-09-19 07:04:06 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:06.656652 | orchestrator | 2025-09-19 07:04:06 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:06.657877 | orchestrator | 2025-09-19 07:04:06 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:04:06.658790 | orchestrator | 2025-09-19 07:04:06 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:06.659593 | orchestrator | 2025-09-19 07:04:06 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:06.659617 | orchestrator | 2025-09-19 07:04:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:09.684811 | orchestrator | 2025-09-19 07:04:09 | INFO  | Task e4f538a5-6878-4d7c-b47e-29b656744f5a is in state SUCCESS 2025-09-19 07:04:09.685806 | orchestrator | 2025-09-19 07:04:09 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:09.686417 | orchestrator | 2025-09-19 07:04:09 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:09.689067 | orchestrator | 2025-09-19 07:04:09 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:09.689846 | orchestrator | 2025-09-19 07:04:09 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:04:09.690902 | orchestrator | 2025-09-19 07:04:09 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:09.692418 | orchestrator | 2025-09-19 
07:04:09 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:09.692441 | orchestrator | 2025-09-19 07:04:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:12.737277 | orchestrator | 2025-09-19 07:04:12 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:12.737406 | orchestrator | 2025-09-19 07:04:12 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:12.737950 | orchestrator | 2025-09-19 07:04:12 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:12.738695 | orchestrator | 2025-09-19 07:04:12 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:04:12.739025 | orchestrator | 2025-09-19 07:04:12 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:12.739746 | orchestrator | 2025-09-19 07:04:12 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:12.739769 | orchestrator | 2025-09-19 07:04:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:15.938588 | orchestrator | 2025-09-19 07:04:15 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:15.938671 | orchestrator | 2025-09-19 07:04:15 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:15.938848 | orchestrator | 2025-09-19 07:04:15 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:15.939820 | orchestrator | 2025-09-19 07:04:15 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:04:15.942499 | orchestrator | 2025-09-19 07:04:15 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:15.942534 | orchestrator | 2025-09-19 07:04:15 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:15.942546 | orchestrator | 2025-09-19 
07:04:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:18.999891 | orchestrator | 2025-09-19 07:04:18 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:18.999975 | orchestrator | 2025-09-19 07:04:18 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:19.000453 | orchestrator | 2025-09-19 07:04:19 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:19.004661 | orchestrator | 2025-09-19 07:04:19 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:04:19.005367 | orchestrator | 2025-09-19 07:04:19 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:19.006101 | orchestrator | 2025-09-19 07:04:19 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:19.006125 | orchestrator | 2025-09-19 07:04:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:22.059120 | orchestrator | 2025-09-19 07:04:22 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:22.059845 | orchestrator | 2025-09-19 07:04:22 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:22.059875 | orchestrator | 2025-09-19 07:04:22 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:22.060719 | orchestrator | 2025-09-19 07:04:22 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state STARTED 2025-09-19 07:04:22.061522 | orchestrator | 2025-09-19 07:04:22 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:22.062519 | orchestrator | 2025-09-19 07:04:22 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:22.062569 | orchestrator | 2025-09-19 07:04:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:25.112983 | orchestrator | 2025-09-19 07:04:25 | INFO  | Task 
da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:25.113081 | orchestrator | 2025-09-19 07:04:25 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:25.113206 | orchestrator | 2025-09-19 07:04:25 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:25.114216 | orchestrator | 2025-09-19 07:04:25 | INFO  | Task 6c0ea851-b1d5-44b0-88d8-3e5f87fdbafd is in state SUCCESS 2025-09-19 07:04:25.114288 | orchestrator | 2025-09-19 07:04:25.114333 | orchestrator | 2025-09-19 07:04:25.114347 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:04:25.114358 | orchestrator | 2025-09-19 07:04:25.114369 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:04:25.114380 | orchestrator | Friday 19 September 2025 07:03:53 +0000 (0:00:00.518) 0:00:00.518 ****** 2025-09-19 07:04:25.114391 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:04:25.114431 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:04:25.114443 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:04:25.114454 | orchestrator | 2025-09-19 07:04:25.114465 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:04:25.114475 | orchestrator | Friday 19 September 2025 07:03:53 +0000 (0:00:00.442) 0:00:00.961 ****** 2025-09-19 07:04:25.114486 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-19 07:04:25.114498 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-19 07:04:25.114509 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-19 07:04:25.114520 | orchestrator | 2025-09-19 07:04:25.114531 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-19 07:04:25.114541 | orchestrator | 2025-09-19 07:04:25.114552 | orchestrator | TASK 
[memcached : include_tasks] *********************************************** 2025-09-19 07:04:25.114563 | orchestrator | Friday 19 September 2025 07:03:54 +0000 (0:00:00.942) 0:00:01.903 ****** 2025-09-19 07:04:25.114573 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:04:25.114585 | orchestrator | 2025-09-19 07:04:25.114595 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-19 07:04:25.114606 | orchestrator | Friday 19 September 2025 07:03:55 +0000 (0:00:00.803) 0:00:02.707 ****** 2025-09-19 07:04:25.114617 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-19 07:04:25.114628 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-19 07:04:25.114639 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-19 07:04:25.114650 | orchestrator | 2025-09-19 07:04:25.114660 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-19 07:04:25.114671 | orchestrator | Friday 19 September 2025 07:03:56 +0000 (0:00:00.722) 0:00:03.430 ****** 2025-09-19 07:04:25.114681 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-19 07:04:25.114692 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-19 07:04:25.114703 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-19 07:04:25.114714 | orchestrator | 2025-09-19 07:04:25.114725 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-19 07:04:25.114735 | orchestrator | Friday 19 September 2025 07:03:58 +0000 (0:00:01.844) 0:00:05.274 ****** 2025-09-19 07:04:25.114746 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:04:25.114757 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:04:25.114768 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:04:25.114778 | orchestrator | 
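The `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines interleaved with this play output come from a poll-until-terminal loop over task IDs. A minimal sketch of that pattern (the `get_state` callback here is a stand-in; the real client presumably queries OSISM's task backend):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    # Poll every pending task until each reaches a terminal state,
    # logging the observed state on every pass -- the pattern behind
    # the "is in state STARTED" lines in this log.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Toy state source: t1 needs two polls, t2 finishes on the first.
states = {"t1": iter(["STARTED", "SUCCESS"]), "t2": iter(["SUCCESS"])}
lines = []
wait_for_tasks(["t1", "t2"], lambda t: next(states[t]),
               interval=0.01, log=lines.append)
```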
2025-09-19 07:04:25.114789 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-19 07:04:25.114800 | orchestrator | Friday 19 September 2025 07:04:00 +0000 (0:00:01.926) 0:00:07.201 ****** 2025-09-19 07:04:25.114810 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:04:25.114821 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:04:25.114832 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:04:25.114842 | orchestrator | 2025-09-19 07:04:25.114853 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:04:25.114864 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:04:25.114876 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:04:25.114888 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:04:25.114899 | orchestrator | 2025-09-19 07:04:25.114909 | orchestrator | 2025-09-19 07:04:25.114920 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:04:25.114931 | orchestrator | Friday 19 September 2025 07:04:06 +0000 (0:00:06.750) 0:00:13.952 ****** 2025-09-19 07:04:25.114949 | orchestrator | =============================================================================== 2025-09-19 07:04:25.114959 | orchestrator | memcached : Restart memcached container --------------------------------- 6.75s 2025-09-19 07:04:25.114970 | orchestrator | memcached : Check memcached container ----------------------------------- 1.93s 2025-09-19 07:04:25.114981 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.84s 2025-09-19 07:04:25.114992 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2025-09-19 07:04:25.115002 | 
orchestrator | memcached : include_tasks ----------------------------------------------- 0.80s 2025-09-19 07:04:25.115013 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.72s 2025-09-19 07:04:25.115024 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2025-09-19 07:04:25.115034 | orchestrator | 2025-09-19 07:04:25.115055 | orchestrator | 2025-09-19 07:04:25.115066 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:04:25.115077 | orchestrator | 2025-09-19 07:04:25.115088 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:04:25.115098 | orchestrator | Friday 19 September 2025 07:03:53 +0000 (0:00:00.409) 0:00:00.409 ****** 2025-09-19 07:04:25.115109 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:04:25.115121 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:04:25.115141 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:04:25.115160 | orchestrator | 2025-09-19 07:04:25.115177 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:04:25.115195 | orchestrator | Friday 19 September 2025 07:03:54 +0000 (0:00:00.599) 0:00:01.009 ****** 2025-09-19 07:04:25.115215 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-19 07:04:25.115234 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-19 07:04:25.115253 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-19 07:04:25.115265 | orchestrator | 2025-09-19 07:04:25.115276 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-19 07:04:25.115287 | orchestrator | 2025-09-19 07:04:25.115298 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-19 07:04:25.115328 | orchestrator | Friday 19 
September 2025 07:03:54 +0000 (0:00:00.419) 0:00:01.428 ****** 2025-09-19 07:04:25.115339 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:04:25.115350 | orchestrator | 2025-09-19 07:04:25.115361 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-19 07:04:25.115371 | orchestrator | Friday 19 September 2025 07:03:55 +0000 (0:00:00.712) 0:00:02.140 ****** 2025-09-19 07:04:25.115385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115503 | orchestrator | 2025-09-19 07:04:25.115514 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-19 07:04:25.115525 | orchestrator | Friday 19 September 2025 07:03:56 +0000 (0:00:01.325) 0:00:03.466 ****** 2025-09-19 07:04:25.115537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115560 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 
07:04:25.115613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115625 | orchestrator | 2025-09-19 07:04:25.115636 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-19 07:04:25.115647 | orchestrator | Friday 19 September 2025 07:03:59 +0000 (0:00:02.679) 0:00:06.145 ****** 2025-09-19 07:04:25.115658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115745 | orchestrator | 2025-09-19 07:04:25.115755 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-19 07:04:25.115766 | orchestrator | Friday 19 September 2025 07:04:02 +0000 (0:00:02.912) 0:00:09.058 ****** 2025-09-19 07:04:25.115777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 07:04:25.115863 | orchestrator | 2025-09-19 07:04:25.115874 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 07:04:25.115885 | orchestrator | Friday 19 September 2025 07:04:04 +0000 (0:00:02.227) 0:00:11.285 ****** 2025-09-19 07:04:25.115895 | orchestrator | 2025-09-19 07:04:25.115906 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 07:04:25.115917 | orchestrator | Friday 19 September 2025 07:04:04 +0000 (0:00:00.153) 0:00:11.439 ****** 2025-09-19 07:04:25.115928 | orchestrator | 2025-09-19 07:04:25.115939 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 07:04:25.115950 | orchestrator | Friday 
19 September 2025 07:04:04 +0000 (0:00:00.125) 0:00:11.564 ****** 2025-09-19 07:04:25.115960 | orchestrator | 2025-09-19 07:04:25.115971 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-19 07:04:25.115982 | orchestrator | Friday 19 September 2025 07:04:04 +0000 (0:00:00.116) 0:00:11.681 ****** 2025-09-19 07:04:25.115993 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:04:25.116003 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:04:25.116014 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:04:25.116025 | orchestrator | 2025-09-19 07:04:25.116036 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-19 07:04:25.116046 | orchestrator | Friday 19 September 2025 07:04:12 +0000 (0:00:07.631) 0:00:19.312 ****** 2025-09-19 07:04:25.116063 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:04:25.116073 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:04:25.116084 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:04:25.116095 | orchestrator | 2025-09-19 07:04:25.116105 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:04:25.116116 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:04:25.116127 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:04:25.116138 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:04:25.116149 | orchestrator | 2025-09-19 07:04:25.116159 | orchestrator | 2025-09-19 07:04:25.116170 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:04:25.116181 | orchestrator | Friday 19 September 2025 07:04:22 +0000 (0:00:09.775) 0:00:29.088 ****** 2025-09-19 07:04:25.116191 | orchestrator | 
=============================================================================== 2025-09-19 07:04:25.116202 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.78s 2025-09-19 07:04:25.116213 | orchestrator | redis : Restart redis container ----------------------------------------- 7.63s 2025-09-19 07:04:25.116223 | orchestrator | redis : Copying over redis config files --------------------------------- 2.91s 2025-09-19 07:04:25.116233 | orchestrator | redis : Copying over default config.json files -------------------------- 2.68s 2025-09-19 07:04:25.116244 | orchestrator | redis : Check redis containers ------------------------------------------ 2.23s 2025-09-19 07:04:25.116255 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.33s 2025-09-19 07:04:25.116265 | orchestrator | redis : include_tasks --------------------------------------------------- 0.71s 2025-09-19 07:04:25.116276 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s 2025-09-19 07:04:25.116286 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-09-19 07:04:25.116297 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.40s 2025-09-19 07:04:25.116321 | orchestrator | 2025-09-19 07:04:25 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:25.116333 | orchestrator | 2025-09-19 07:04:25 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:25.116344 | orchestrator | 2025-09-19 07:04:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:28.244283 | orchestrator | 2025-09-19 07:04:28 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:28.244352 | orchestrator | 2025-09-19 07:04:28 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 
07:04:28.244358 | orchestrator | 2025-09-19 07:04:28 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:28.244362 | orchestrator | 2025-09-19 07:04:28 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:28.244376 | orchestrator | 2025-09-19 07:04:28 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:28.244380 | orchestrator | 2025-09-19 07:04:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:31.225839 | orchestrator | 2025-09-19 07:04:31 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:31.226382 | orchestrator | 2025-09-19 07:04:31 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:31.226574 | orchestrator | 2025-09-19 07:04:31 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:31.227106 | orchestrator | 2025-09-19 07:04:31 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:31.227729 | orchestrator | 2025-09-19 07:04:31 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:31.227903 | orchestrator | 2025-09-19 07:04:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:34.372568 | orchestrator | 2025-09-19 07:04:34 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:34.372649 | orchestrator | 2025-09-19 07:04:34 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:34.372664 | orchestrator | 2025-09-19 07:04:34 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:34.372675 | orchestrator | 2025-09-19 07:04:34 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:34.372686 | orchestrator | 2025-09-19 07:04:34 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 
07:04:34.372697 | orchestrator | 2025-09-19 07:04:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:37.324119 | orchestrator | 2025-09-19 07:04:37 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:37.326005 | orchestrator | 2025-09-19 07:04:37 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:37.326713 | orchestrator | 2025-09-19 07:04:37 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:37.329912 | orchestrator | 2025-09-19 07:04:37 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:37.330556 | orchestrator | 2025-09-19 07:04:37 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:37.330701 | orchestrator | 2025-09-19 07:04:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:40.379108 | orchestrator | 2025-09-19 07:04:40 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:40.379203 | orchestrator | 2025-09-19 07:04:40 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:40.379227 | orchestrator | 2025-09-19 07:04:40 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:40.379246 | orchestrator | 2025-09-19 07:04:40 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:40.379266 | orchestrator | 2025-09-19 07:04:40 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:40.380404 | orchestrator | 2025-09-19 07:04:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:43.446812 | orchestrator | 2025-09-19 07:04:43 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:43.446898 | orchestrator | 2025-09-19 07:04:43 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:43.446913 | orchestrator 
| 2025-09-19 07:04:43 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:43.446924 | orchestrator | 2025-09-19 07:04:43 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:43.446935 | orchestrator | 2025-09-19 07:04:43 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:43.446945 | orchestrator | 2025-09-19 07:04:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:46.464378 | orchestrator | 2025-09-19 07:04:46 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:46.464889 | orchestrator | 2025-09-19 07:04:46 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:46.465809 | orchestrator | 2025-09-19 07:04:46 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:46.466495 | orchestrator | 2025-09-19 07:04:46 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:46.467333 | orchestrator | 2025-09-19 07:04:46 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:46.467358 | orchestrator | 2025-09-19 07:04:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:49.519463 | orchestrator | 2025-09-19 07:04:49 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:49.520064 | orchestrator | 2025-09-19 07:04:49 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:49.520772 | orchestrator | 2025-09-19 07:04:49 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:49.521644 | orchestrator | 2025-09-19 07:04:49 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:49.522274 | orchestrator | 2025-09-19 07:04:49 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:49.522407 | orchestrator | 
2025-09-19 07:04:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:52.556483 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:52.556672 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:52.557500 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:52.558135 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:52.559064 | orchestrator | 2025-09-19 07:04:52 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:52.559135 | orchestrator | 2025-09-19 07:04:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:55.600341 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state STARTED 2025-09-19 07:04:55.600421 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:55.603077 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:55.603529 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:55.606505 | orchestrator | 2025-09-19 07:04:55 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:55.606547 | orchestrator | 2025-09-19 07:04:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:04:58.634000 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task da7745a8-fa6b-4460-9123-67f5cc043a4e is in state SUCCESS 2025-09-19 07:04:58.635119 | orchestrator | 2025-09-19 07:04:58.635171 | orchestrator | 2025-09-19 07:04:58.635183 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-09-19 07:04:58.635193 | orchestrator | 2025-09-19 07:04:58.635202 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:04:58.635210 | orchestrator | Friday 19 September 2025 07:03:53 +0000 (0:00:00.603) 0:00:00.603 ****** 2025-09-19 07:04:58.635219 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:04:58.635229 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:04:58.635237 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:04:58.635264 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:04:58.635273 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:04:58.635282 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:04:58.635333 | orchestrator | 2025-09-19 07:04:58.635342 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:04:58.635351 | orchestrator | Friday 19 September 2025 07:03:55 +0000 (0:00:01.315) 0:00:01.919 ****** 2025-09-19 07:04:58.635359 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:04:58.635368 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:04:58.635377 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:04:58.635385 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:04:58.635394 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:04:58.635428 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 07:04:58.635437 | orchestrator | 2025-09-19 07:04:58.635446 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-19 07:04:58.635455 | orchestrator | 2025-09-19 07:04:58.635464 | orchestrator | TASK 
[openvswitch : include_tasks] *********************************************
2025-09-19 07:04:58.635473 | orchestrator | Friday 19 September 2025 07:03:56 +0000 (0:00:00.862) 0:00:02.782 ******
2025-09-19 07:04:58.635483 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:04:58.635492 | orchestrator |
2025-09-19 07:04:58.635501 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-19 07:04:58.635520 | orchestrator | Friday 19 September 2025 07:03:57 +0000 (0:00:01.115) 0:00:03.897 ******
2025-09-19 07:04:58.635530 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-19 07:04:58.635539 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-19 07:04:58.635548 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-19 07:04:58.635557 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-19 07:04:58.635565 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-19 07:04:58.635574 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-19 07:04:58.635582 | orchestrator |
2025-09-19 07:04:58.635591 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-19 07:04:58.635600 | orchestrator | Friday 19 September 2025 07:03:58 +0000 (0:00:01.227) 0:00:05.125 ******
2025-09-19 07:04:58.635609 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-19 07:04:58.635617 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-19 07:04:58.635626 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-19 07:04:58.635635 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-19 07:04:58.635643 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-19 07:04:58.635652 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-19 07:04:58.635660 | orchestrator |
2025-09-19 07:04:58.635671 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-19 07:04:58.635681 | orchestrator | Friday 19 September 2025 07:04:00 +0000 (0:00:01.833) 0:00:06.958 ******
2025-09-19 07:04:58.635691 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch) 
2025-09-19 07:04:58.635701 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:04:58.635711 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch) 
2025-09-19 07:04:58.635721 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:04:58.635731 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch) 
2025-09-19 07:04:58.635741 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:04:58.635751 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch) 
2025-09-19 07:04:58.635768 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:04:58.635778 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch) 
2025-09-19 07:04:58.635788 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:04:58.635798 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch) 
2025-09-19 07:04:58.635808 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:04:58.635819 | orchestrator |
2025-09-19 07:04:58.635829 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-09-19 07:04:58.635838 | orchestrator | Friday 19 September 2025 07:04:02 +0000 (0:00:01.893) 0:00:08.851 ******
2025-09-19 07:04:58.635846 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:04:58.635855 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:04:58.635863 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:04:58.635872 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:04:58.635880 | orchestrator | skipping: [testbed-node-4] 
2025-09-19 07:04:58.635889 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:04:58.635897 | orchestrator | 2025-09-19 07:04:58.635905 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-19 07:04:58.635914 | orchestrator | Friday 19 September 2025 07:04:03 +0000 (0:00:01.263) 0:00:10.114 ****** 2025-09-19 07:04:58.635939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.635953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.635966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.635976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.635992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636039 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636048 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636086 | orchestrator | 2025-09-19 07:04:58.636121 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-19 07:04:58.636131 | orchestrator | Friday 19 September 2025 07:04:05 +0000 (0:00:02.131) 0:00:12.245 ****** 2025-09-19 07:04:58.636140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636279 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636303 | orchestrator | 2025-09-19 07:04:58.636312 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-19 07:04:58.636321 | orchestrator | Friday 19 September 2025 07:04:08 +0000 (0:00:02.553) 0:00:14.799 ****** 2025-09-19 07:04:58.636330 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:04:58.636339 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:04:58.636348 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:04:58.636356 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:04:58.636365 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:04:58.636374 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:04:58.636382 | orchestrator | 2025-09-19 07:04:58.636391 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-19 07:04:58.636400 | orchestrator | Friday 19 September 2025 07:04:09 +0000 (0:00:01.210) 0:00:16.009 ****** 2025-09-19 07:04:58.636412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636483 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 07:04:58.636543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 07:04:58.636557 | orchestrator |
2025-09-19 07:04:58.636566 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:04:58.636575 | orchestrator | Friday 19 September 2025 07:04:11 +0000 (0:00:02.081) 0:00:18.090 ******
2025-09-19 07:04:58.636584 | orchestrator |
2025-09-19 07:04:58.636593 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:04:58.636601 | orchestrator | Friday 19 September 2025 07:04:11 +0000 (0:00:00.570) 0:00:18.661 ******
2025-09-19 07:04:58.636610 | orchestrator |
2025-09-19 07:04:58.636618 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:04:58.636631 | orchestrator | Friday 19 September 2025 07:04:12 +0000 (0:00:00.140) 0:00:18.802 ******
2025-09-19 07:04:58.636639 | orchestrator |
2025-09-19 07:04:58.636648 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:04:58.636657 | orchestrator | Friday 19 September 2025 07:04:12 +0000 (0:00:00.163) 0:00:18.966 ******
2025-09-19 07:04:58.636666 | orchestrator |
2025-09-19 07:04:58.636674 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:04:58.636683 | orchestrator | Friday 19 September 2025 07:04:12 +0000 (0:00:00.305) 0:00:19.271 ******
2025-09-19 07:04:58.636692 | orchestrator |
2025-09-19 07:04:58.636700 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 07:04:58.636709 | orchestrator | Friday 19 September 2025 07:04:12 +0000 (0:00:00.342) 0:00:19.613 ******
2025-09-19 07:04:58.636717 | orchestrator |
2025-09-19 07:04:58.636726 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-19 07:04:58.636735 | orchestrator | Friday 19 September 2025 07:04:13 +0000 (0:00:00.336) 0:00:19.950 ******
2025-09-19 07:04:58.636743 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:04:58.636752 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:04:58.636761 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:04:58.636769 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:04:58.636778 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:04:58.636786 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:04:58.636816 | orchestrator |
2025-09-19 07:04:58.636825 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-19 07:04:58.636834 | orchestrator | Friday 19 September 2025 07:04:24 +0000 (0:00:11.559) 0:00:31.509 ******
2025-09-19 07:04:58.636842 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:04:58.636871 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:04:58.636881 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:04:58.636890 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:04:58.636899 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:04:58.636907 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:04:58.636924 | orchestrator |
2025-09-19 07:04:58.636933 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-19 07:04:58.636942 | orchestrator | Friday 19 September 2025 07:04:26 +0000 (0:00:01.668) 0:00:33.178 ******
2025-09-19 07:04:58.636951 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:04:58.636959 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:04:58.636968 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:04:58.636976 | orchestrator | changed: [testbed-node-4] 
2025-09-19 07:04:58.636985 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:04:58.636993 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:04:58.637002 | orchestrator | 2025-09-19 07:04:58.637010 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-19 07:04:58.637019 | orchestrator | Friday 19 September 2025 07:04:33 +0000 (0:00:06.768) 0:00:39.946 ****** 2025-09-19 07:04:58.637028 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-19 07:04:58.637037 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-19 07:04:58.637045 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-19 07:04:58.637059 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-19 07:04:58.637068 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-19 07:04:58.637082 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-19 07:04:58.637091 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-19 07:04:58.637099 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-19 07:04:58.637108 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-19 07:04:58.637116 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-19 07:04:58.637125 | orchestrator | changed: 
[testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-19 07:04:58.637133 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-19 07:04:58.637141 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 07:04:58.637150 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 07:04:58.637158 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 07:04:58.637167 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 07:04:58.637175 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 07:04:58.637184 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 07:04:58.637192 | orchestrator | 2025-09-19 07:04:58.637201 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-19 07:04:58.637214 | orchestrator | Friday 19 September 2025 07:04:40 +0000 (0:00:07.301) 0:00:47.247 ****** 2025-09-19 07:04:58.637223 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-19 07:04:58.637232 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:04:58.637240 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-19 07:04:58.637249 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:04:58.637257 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-19 07:04:58.637266 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:04:58.637275 | orchestrator | changed: 
[testbed-node-0] => (item=br-ex) 2025-09-19 07:04:58.637297 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-19 07:04:58.637306 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-19 07:04:58.637315 | orchestrator | 2025-09-19 07:04:58.637324 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-19 07:04:58.637332 | orchestrator | Friday 19 September 2025 07:04:43 +0000 (0:00:02.828) 0:00:50.076 ****** 2025-09-19 07:04:58.637341 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-19 07:04:58.637349 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:04:58.637358 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-19 07:04:58.637366 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:04:58.637375 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-19 07:04:58.637383 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:04:58.637392 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-19 07:04:58.637406 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-19 07:04:58.637414 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-19 07:04:58.637422 | orchestrator | 2025-09-19 07:04:58.637431 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-19 07:04:58.637439 | orchestrator | Friday 19 September 2025 07:04:46 +0000 (0:00:03.495) 0:00:53.571 ****** 2025-09-19 07:04:58.637448 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:04:58.637456 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:04:58.637465 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:04:58.637473 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:04:58.637482 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:04:58.637490 | orchestrator | changed: 
[testbed-node-5] 2025-09-19 07:04:58.637499 | orchestrator | 2025-09-19 07:04:58.637507 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:04:58.637516 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 07:04:58.637525 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 07:04:58.637534 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 07:04:58.637542 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:04:58.637551 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:04:58.637565 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:04:58.637574 | orchestrator | 2025-09-19 07:04:58.637582 | orchestrator | 2025-09-19 07:04:58.637591 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:04:58.637599 | orchestrator | Friday 19 September 2025 07:04:55 +0000 (0:00:09.113) 0:01:02.685 ****** 2025-09-19 07:04:58.637608 | orchestrator | =============================================================================== 2025-09-19 07:04:58.637617 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.88s 2025-09-19 07:04:58.637625 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.56s 2025-09-19 07:04:58.637634 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.30s 2025-09-19 07:04:58.637642 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.50s 2025-09-19 07:04:58.637651 | orchestrator | openvswitch : 
Ensuring OVS bridge is properly setup --------------------- 2.83s 2025-09-19 07:04:58.637659 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.55s 2025-09-19 07:04:58.637668 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.13s 2025-09-19 07:04:58.637676 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.08s 2025-09-19 07:04:58.637684 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.89s 2025-09-19 07:04:58.637693 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.86s 2025-09-19 07:04:58.637701 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.83s 2025-09-19 07:04:58.637710 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.67s 2025-09-19 07:04:58.637718 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.32s 2025-09-19 07:04:58.637727 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.26s 2025-09-19 07:04:58.637735 | orchestrator | module-load : Load modules ---------------------------------------------- 1.23s 2025-09-19 07:04:58.637753 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.21s 2025-09-19 07:04:58.637761 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.12s 2025-09-19 07:04:58.637770 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s 2025-09-19 07:04:58.637779 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:04:58.637889 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:04:58.637901 | orchestrator | 
2025-09-19 07:04:58 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:04:58.637910 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:04:58.637918 | orchestrator | 2025-09-19 07:04:58 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state STARTED 2025-09-19 07:04:58.637927 | orchestrator | 2025-09-19 07:04:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:14.118257 | orchestrator | 2025-09-19 07:05:14 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:05:14.118426 | orchestrator | 2025-09-19 07:05:14 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:05:14.119022 | orchestrator | 2025-09-19 07:05:14 | INFO  | 
Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:05:14.120878 | orchestrator | 2025-09-19 07:05:14 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:05:14.121850 | orchestrator | 2025-09-19 07:05:14 | INFO  | Task 06c7df94-399e-4a14-9f78-f2b2c012b2d6 is in state SUCCESS 2025-09-19 07:05:14.123346 | orchestrator | 2025-09-19 07:05:14.123417 | orchestrator | 2025-09-19 07:05:14.123433 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-19 07:05:14.123526 | orchestrator | 2025-09-19 07:05:14.123538 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-19 07:05:14.123549 | orchestrator | Friday 19 September 2025 07:01:26 +0000 (0:00:00.160) 0:00:00.160 ****** 2025-09-19 07:05:14.123561 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:05:14.123572 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:05:14.123583 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:05:14.123594 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:05:14.123604 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:05:14.123615 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:05:14.123626 | orchestrator | 2025-09-19 07:05:14.123637 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-19 07:05:14.123647 | orchestrator | Friday 19 September 2025 07:01:27 +0000 (0:00:00.782) 0:00:00.943 ****** 2025-09-19 07:05:14.123658 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.123669 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.123680 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:05:14.123691 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.123701 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.123712 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.123723 | orchestrator | 2025-09-19 
07:05:14.123734 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-19 07:05:14.123745 | orchestrator | Friday 19 September 2025 07:01:28 +0000 (0:00:00.703) 0:00:01.647 ****** 2025-09-19 07:05:14.123755 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.123766 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.123777 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:05:14.123788 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.123798 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.123809 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.123820 | orchestrator | 2025-09-19 07:05:14.123831 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-19 07:05:14.123842 | orchestrator | Friday 19 September 2025 07:01:28 +0000 (0:00:00.701) 0:00:02.348 ****** 2025-09-19 07:05:14.123853 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:05:14.123864 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:05:14.123875 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:05:14.123885 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:05:14.123896 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:05:14.123928 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:05:14.123939 | orchestrator | 2025-09-19 07:05:14.123951 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-19 07:05:14.123962 | orchestrator | Friday 19 September 2025 07:01:31 +0000 (0:00:02.758) 0:00:05.106 ****** 2025-09-19 07:05:14.123973 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:05:14.123984 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:05:14.123994 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:05:14.124005 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:05:14.124015 | orchestrator | changed: [testbed-node-1] 
2025-09-19 07:05:14.124026 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:05:14.124036 | orchestrator | 2025-09-19 07:05:14.124047 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-19 07:05:14.124057 | orchestrator | Friday 19 September 2025 07:01:32 +0000 (0:00:01.055) 0:00:06.162 ****** 2025-09-19 07:05:14.124068 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:05:14.124079 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:05:14.124089 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:05:14.124100 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:05:14.124110 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:05:14.124120 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:05:14.124131 | orchestrator | 2025-09-19 07:05:14.124142 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-19 07:05:14.124152 | orchestrator | Friday 19 September 2025 07:01:33 +0000 (0:00:01.275) 0:00:07.438 ****** 2025-09-19 07:05:14.124163 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.124173 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.124184 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:05:14.124194 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.124205 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.124215 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.124226 | orchestrator | 2025-09-19 07:05:14.124237 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-19 07:05:14.124247 | orchestrator | Friday 19 September 2025 07:01:34 +0000 (0:00:00.569) 0:00:08.007 ****** 2025-09-19 07:05:14.124258 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.124269 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.124308 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
07:05:14.124319 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.124330 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.124340 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.124351 | orchestrator | 2025-09-19 07:05:14.124362 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-19 07:05:14.124373 | orchestrator | Friday 19 September 2025 07:01:35 +0000 (0:00:01.151) 0:00:09.159 ****** 2025-09-19 07:05:14.124384 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:05:14.124395 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:05:14.124405 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.124416 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:05:14.124427 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:05:14.124438 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.124448 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:05:14.124459 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:05:14.124470 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:05:14.124492 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:05:14.124520 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:05:14.124531 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.124542 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:05:14.124561 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:05:14.124572 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:05:14.124582 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.124593 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:05:14.124604 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.124615 | orchestrator | 2025-09-19 07:05:14.124625 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-19 07:05:14.124636 | orchestrator | Friday 19 September 2025 07:01:36 +0000 (0:00:01.077) 0:00:10.237 ****** 2025-09-19 07:05:14.124646 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.124657 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.124668 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:05:14.124678 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.124689 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.124700 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.124710 | orchestrator | 2025-09-19 07:05:14.124721 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-19 07:05:14.124733 | orchestrator | Friday 19 September 2025 07:01:38 +0000 (0:00:01.863) 0:00:12.100 ****** 2025-09-19 07:05:14.124744 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:05:14.124755 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:05:14.124766 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:05:14.124776 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:05:14.124787 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:05:14.124797 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:05:14.124808 | orchestrator | 2025-09-19 07:05:14.124819 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-19 07:05:14.124829 | orchestrator | Friday 19 September 2025 07:01:39 
+0000 (0:00:00.960) 0:00:13.061 ****** 2025-09-19 07:05:14.124840 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:05:14.124851 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:05:14.124862 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:05:14.124872 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:05:14.124883 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:05:14.124893 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:05:14.124904 | orchestrator | 2025-09-19 07:05:14.124915 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-19 07:05:14.124926 | orchestrator | Friday 19 September 2025 07:01:45 +0000 (0:00:06.103) 0:00:19.164 ****** 2025-09-19 07:05:14.124936 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.124947 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.124958 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:05:14.124969 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.124979 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.124990 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.125000 | orchestrator | 2025-09-19 07:05:14.125011 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-19 07:05:14.125022 | orchestrator | Friday 19 September 2025 07:01:48 +0000 (0:00:02.361) 0:00:21.526 ****** 2025-09-19 07:05:14.125033 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.125044 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.125054 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:05:14.125065 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.125075 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.125086 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.125097 | orchestrator | 2025-09-19 07:05:14.125108 | orchestrator | TASK [k3s_custom_registries 
: Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-19 07:05:14.125120 | orchestrator | Friday 19 September 2025 07:01:50 +0000 (0:00:02.176) 0:00:23.702 ****** 2025-09-19 07:05:14.125136 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:05:14.125147 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:05:14.125157 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:05:14.125168 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:05:14.125179 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:05:14.125190 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:05:14.125200 | orchestrator | 2025-09-19 07:05:14.125211 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-19 07:05:14.125222 | orchestrator | Friday 19 September 2025 07:01:51 +0000 (0:00:01.154) 0:00:24.857 ****** 2025-09-19 07:05:14.125233 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-19 07:05:14.125244 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-19 07:05:14.125255 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-19 07:05:14.125266 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-19 07:05:14.125294 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-19 07:05:14.125306 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-19 07:05:14.125316 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-19 07:05:14.125327 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-19 07:05:14.125338 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-19 07:05:14.125349 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-19 07:05:14.125359 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-19 07:05:14.125370 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-19 
07:05:14.125381 | orchestrator | 2025-09-19 07:05:14.125392 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-19 07:05:14.125403 | orchestrator | Friday 19 September 2025 07:01:53 +0000 (0:00:01.732) 0:00:26.590 ****** 2025-09-19 07:05:14.125413 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:05:14.125424 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:05:14.125440 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:05:14.125451 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:05:14.125462 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:05:14.125473 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:05:14.125483 | orchestrator | 2025-09-19 07:05:14.125502 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-19 07:05:14.125513 | orchestrator | 2025-09-19 07:05:14.125524 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-19 07:05:14.125535 | orchestrator | Friday 19 September 2025 07:01:54 +0000 (0:00:01.732) 0:00:28.322 ****** 2025-09-19 07:05:14.125546 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:05:14.125557 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:05:14.125568 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:05:14.125579 | orchestrator | 2025-09-19 07:05:14.125590 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-19 07:05:14.125601 | orchestrator | Friday 19 September 2025 07:01:55 +0000 (0:00:00.849) 0:00:29.172 ****** 2025-09-19 07:05:14.125612 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:05:14.125623 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:05:14.125634 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:05:14.125645 | orchestrator | 2025-09-19 07:05:14.125656 | orchestrator | TASK [k3s_server : Stop k3s] 
***************************************************
Friday 19 September 2025 07:01:57 +0000 (0:00:01.295) 0:00:30.467 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Clean previous runs of k3s-init] ****************************
Friday 19 September 2025 07:01:57 +0000 (0:00:00.961) 0:00:31.429 ******
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
Friday 19 September 2025 07:01:59 +0000 (0:00:01.267) 0:00:32.696 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
Friday 19 September 2025 07:01:59 +0000 (0:00:00.557) 0:00:33.254 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Create custom resolv.conf for k3s] **************************
Friday 19 September 2025 07:02:01 +0000 (0:00:01.258) 0:00:34.513 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Deploy vip manifest] ****************************************
Friday 19 September 2025 07:02:02 +0000 (0:00:01.896) 0:00:36.409 ******
included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
Friday 19 September 2025 07:02:03 +0000 (0:00:00.587) 0:00:36.997 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Create manifests directory on first master] *****************
Friday 19 September 2025 07:02:05 +0000 (0:00:02.375) 0:00:39.372 ******
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Download vip rbac manifest to first master] *****************
Friday 19 September 2025 07:02:06 +0000 (0:00:00.841) 0:00:40.214 ******
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Copy vip manifest to first master] **************************
Friday 19 September 2025 07:02:07 +0000 (0:00:00.919) 0:00:41.134 ******
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Deploy metallb manifest] ************************************
Friday 19 September 2025 07:02:09 +0000 (0:00:01.496) 0:00:42.630 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Friday 19 September 2025 07:02:09 +0000 (0:00:00.519) 0:00:43.150 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Friday 19 September 2025 07:02:10 +0000 (0:00:00.462) 0:00:43.612 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Friday 19 September 2025 07:02:13 +0000 (0:00:02.947) 0:00:46.560 ******
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
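The `FAILED - RETRYING` lines above come from Ansible's `retries`/`until` pattern: the verification task re-runs its node check until every master reports in, counting down from 20 attempts. A minimal sketch of the equivalent loop, where `get_ready_nodes` is a hypothetical stand-in for the real check (not the role's actual code):

```python
import time

def wait_for_nodes(get_ready_nodes, expected, retries=20, delay=3):
    """Poll until get_ready_nodes() reports `expected` nodes, like Ansible's retries/until."""
    nodes = []
    for attempt in range(retries):
        nodes = get_ready_nodes()
        if len(nodes) >= expected:
            return nodes
        # Mirror the log's countdown message.
        print(f"FAILED - RETRYING: verify nodes joined ({retries - attempt - 1} retries left).")
        time.sleep(delay)
    raise TimeoutError(f"only {len(nodes)} of {expected} nodes joined")

# Demo with a fake check that succeeds on the third attempt.
calls = {"n": 0}
def fake_check():
    calls["n"] += 1
    return ["node-0", "node-1", "node-2"] if calls["n"] >= 3 else ["node-0"]

print(wait_for_nodes(fake_check, expected=3, delay=0))
```

In the run above the check succeeded after roughly 55 seconds, well inside the retry budget.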
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Friday 19 September 2025 07:03:08 +0000 (0:00:55.013) 0:01:41.574 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Friday 19 September 2025 07:03:08 +0000 (0:00:00.454) 0:01:42.028 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Friday 19 September 2025 07:03:09 +0000 (0:00:01.115) 0:01:43.144 ******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_server : Enable and check K3s service] *******************************
Friday 19 September 2025 07:03:11 +0000 (0:00:01.549) 0:01:44.694 ******
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [k3s_server : Wait for node-token] ****************************************
Friday 19 September 2025 07:03:37 +0000 (0:00:26.237) 0:02:10.931 ******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Friday 19 September 2025 07:03:38 +0000 (0:00:00.699) 0:02:11.631 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Friday 19 September 2025 07:03:38 +0000 (0:00:00.660) 0:02:12.291 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Friday 19 September 2025 07:03:39 +0000 (0:00:00.716) 0:02:13.008 ******
ok: [testbed-node-0]
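The node-token tasks above form a small pattern: record the token file's access mode, widen it enough to read, capture the token, then put the original mode back. A sketch of the same sequence (the mode values and helper name are illustrative; k3s keeps the token at `/var/lib/rancher/k3s/server/node-token`, demonstrated here against a temporary file):

```python
import os
import stat
import tempfile

def read_with_restored_mode(path):
    """Read a protected file by temporarily relaxing, then restoring, its mode."""
    original_mode = stat.S_IMODE(os.stat(path).st_mode)  # "Register node-token file access mode"
    os.chmod(path, 0o644)                                # "Change file access node-token"
    try:
        with open(path) as f:                            # "Read node-token from master"
            token = f.read().strip()
    finally:
        os.chmod(path, original_mode)                    # "Restore node-token file access"
    return token

# Demo against a throwaway file instead of the real node-token.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("K10abc::server:secret\n")
os.chmod(f.name, 0o600)
print(read_with_restored_mode(f.name))
assert stat.S_IMODE(os.stat(f.name).st_mode) == 0o600  # mode restored
```

The `try`/`finally` mirrors why the playbook runs the restore task unconditionally: the token must never be left world-readable.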
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Friday 19 September 2025 07:03:40 +0000 (0:00:01.046) 0:02:14.054 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Friday 19 September 2025 07:03:40 +0000 (0:00:00.349) 0:02:14.403 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Friday 19 September 2025 07:03:41 +0000 (0:00:00.661) 0:02:15.064 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Friday 19 September 2025 07:03:42 +0000 (0:00:00.725) 0:02:15.790 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Friday 19 September 2025 07:03:43 +0000 (0:00:01.090) 0:02:16.881 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Friday 19 September 2025 07:03:44 +0000 (0:00:00.825) 0:02:17.706 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Friday 19 September 2025 07:03:44 +0000 (0:00:00.313) 0:02:18.020 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Friday 19 September 2025 07:03:44 +0000 (0:00:00.284) 0:02:18.305 ******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Friday 19 September 2025 07:03:45 +0000 (0:00:00.816) 0:02:19.121 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Friday 19 September 2025 07:03:46 +0000 (0:00:00.665) 0:02:19.786 ******
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
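The removal task above loops over manifests that were only needed during bootstrap: anything left in `/var/lib/rancher/k3s/server/manifests` is re-applied by k3s on every start, so the role deletes those files and folders once the cluster is up. A sketch of the same loop (item names taken from the log; the function and directory handling are illustrative):

```python
import shutil
import tempfile
from pathlib import Path

def remove_bootstrap_manifests(manifest_dir, names):
    """Delete bootstrap-only manifests (files or directories) so k3s won't auto-apply them."""
    removed = []
    for name in names:
        path = Path(manifest_dir) / name
        if path.is_dir():
            shutil.rmtree(path)   # directories such as metrics-server/
        elif path.exists():
            path.unlink()         # plain files such as coredns.yaml
        removed.append(str(path))
    return removed

# Items seen in the log for testbed-node-0, demonstrated in a throwaway directory.
items = ["rolebindings.yaml", "local-storage.yaml", "coredns.yaml",
         "vip.yaml", "vip-rbac.yaml", "runtimes.yaml", "ccm.yaml", "metrics-server"]
demo = Path(tempfile.mkdtemp())
for name in items[:-1]:
    (demo / name).write_text("apiVersion: v1\n")
(demo / "metrics-server").mkdir()
print(remove_bootstrap_manifests(demo, items))
```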
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Friday 19 September 2025 07:03:49 +0000 (0:00:03.236) 0:02:23.022 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Friday 19 September 2025 07:03:50 +0000 (0:00:00.506) 0:02:23.529 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Friday 19 September 2025 07:03:50 +0000 (0:00:00.711) 0:02:24.241 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Friday 19 September 2025 07:03:51 +0000 (0:00:00.343) 0:02:24.584 ******
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Friday 19 September 2025 07:03:51 +0000 (0:00:00.714) 0:02:25.298 ******
skipping: [testbed-node-3]
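The http_proxy tasks above (and the two that follow) are skipped in this run because no proxy is configured; when they do run, they create a systemd drop-in directory for the agent unit and install an `Environment` file so k3s inherits the proxy settings. A sketch of what such a drop-in might contain (variable names and values are illustrative, not the role's actual template):

```python
def render_proxy_dropin(http_proxy, https_proxy, no_proxy):
    """Render a systemd drop-in exporting proxy variables to a k3s unit."""
    return (
        "[Service]\n"
        f'Environment="HTTP_PROXY={http_proxy}"\n'
        f'Environment="HTTPS_PROXY={https_proxy}"\n'
        f'Environment="NO_PROXY={no_proxy}"\n'
    )

# Would be written to e.g. /etc/systemd/system/k3s-node.service.d/ (path assumed).
conf = render_proxy_dropin("http://proxy:3128", "http://proxy:3128",
                           "localhost,127.0.0.1,.cluster.local")
print(conf)
```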
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Friday 19 September 2025 07:03:52 +0000 (0:00:00.252) 0:02:25.551 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Friday 19 September 2025 07:03:52 +0000 (0:00:00.226) 0:02:25.777 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Friday 19 September 2025 07:03:52 +0000 (0:00:00.276) 0:02:26.054 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Friday 19 September 2025 07:03:53 +0000 (0:00:00.774) 0:02:26.829 ******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [k3s_agent : Configure the k3s service] ***********************************
Friday 19 September 2025 07:03:54 +0000 (0:00:01.044) 0:02:27.874 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Friday 19 September 2025 07:03:55 +0000 (0:00:01.273) 0:02:29.148 ******
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Friday 19 September 2025 07:04:08 +0000 (0:00:12.736) 0:02:41.885 ******
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Friday 19 September 2025 07:04:09 +0000 (0:00:00.720) 0:02:42.605 ******
changed: [testbed-manager]
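The kubeconfig play that starts above fetches the admin config from testbed-node-0; the "Change server address" tasks then rewrite its `server:` entry so the manager talks to a reachable API endpoint instead of the address k3s wrote locally. A minimal sketch of that substitution (the target address is from the log; the original localhost value and function name are assumptions):

```python
import re

def rewrite_server(kubeconfig_text, new_server):
    """Point every cluster's `server:` entry at a reachable API endpoint."""
    return re.sub(r"server: https://\S+", f"server: {new_server}", kubeconfig_text)

original = """apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
"""
print(rewrite_server(original, "https://192.168.16.8:6443"))
```

The same rewrite runs twice in the play: once for the operator's `~/.kube/config` and once for the copy made available inside the manager service.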
TASK [Get kubeconfig file] *****************************************************
Friday 19 September 2025 07:04:09 +0000 (0:00:00.425) 0:02:43.031 ******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Friday 19 September 2025 07:04:10 +0000 (0:00:00.558) 0:02:43.589 ******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Friday 19 September 2025 07:04:11 +0000 (0:00:00.897) 0:02:44.487 ******
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Friday 19 September 2025 07:04:11 +0000 (0:00:00.571) 0:02:45.059 ******
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Friday 19 September 2025 07:04:13 +0000 (0:00:01.453) 0:02:46.512 ******
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Friday 19 September 2025 07:04:13 +0000 (0:00:00.762) 0:02:47.275 ******
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Friday 19 September 2025 07:04:14 +0000 (0:00:00.462) 0:02:47.737 ******
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Friday 19 September 2025 07:04:14 +0000 (0:00:00.694) 0:02:48.432 ******
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Friday 19 September 2025 07:04:15 +0000 (0:00:00.116) 0:02:48.548 ******
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Friday 19 September 2025 07:04:15 +0000 (0:00:00.268) 0:02:48.817 ******
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Friday 19 September 2025 07:04:16 +0000 (0:00:00.703) 0:02:49.521 ******
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Friday 19 September 2025 07:04:17 +0000 (0:00:01.911) 0:02:51.432 ******
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Friday 19 September 2025 07:04:18 +0000 (0:00:00.715) 0:02:52.148 ******
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Friday 19 September 2025 07:04:19 +0000 (0:00:00.399) 0:02:52.547 ******
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Friday 19 September 2025 07:04:26 +0000 (0:00:07.440) 0:02:59.988 ******
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Friday 19 September 2025 07:04:41 +0000 (0:00:15.449) 0:03:15.437 ******
ok: [testbed-manager]
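The kubectl install above follows the usual Debian-family recipe: fetch the repository's GPG key, fix its permissions, add the repository definition, then install the package. A sketch of how such a repository line can be assembled; the `pkgs.k8s.io` layout is the current upstream convention, but the exact paths and version this role uses are assumptions:

```python
def kubernetes_apt_repo(version, keyring="/etc/apt/keyrings/kubernetes-apt-keyring.gpg"):
    """Build a signed-by apt source line for the upstream Kubernetes package repo."""
    return (f"deb [signed-by={keyring}] "
            f"https://pkgs.k8s.io/core:/stable:/v{version}/deb/ /")

# Hypothetical version; the log does not record which one the role pinned.
print(kubernetes_apt_repo("1.30"))
```

Most of the 15 seconds spent in "Install required packages" is the apt download and unpack of kubectl itself.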
PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Friday 19 September 2025 07:04:42 +0000 (0:00:00.491) 0:03:15.929 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Friday 19 September 2025 07:04:42 +0000 (0:00:00.281) 0:03:16.210 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Friday 19 September 2025 07:04:43 +0000 (0:00:00.459) 0:03:16.670 ******
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Friday 19 September 2025 07:04:44 +0000 (0:00:00.846) 0:03:17.516 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
Friday 19 September 2025 07:04:44 +0000 (0:00:00.207) 0:03:17.723 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
Friday 19 September 2025 07:04:44 +0000 (0:00:00.203) 0:03:17.927 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
Friday 19 September 2025 07:04:44 +0000 (0:00:00.194) 0:03:18.121 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
Friday 19 September 2025 07:04:44 +0000 (0:00:00.166) 0:03:18.287 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Log installed Cilium CLI version] **********************
Friday 19 September 2025 07:04:45 +0000 (0:00:00.193) 0:03:18.481 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
Friday 19 September 2025 07:04:45 +0000 (0:00:00.207) 0:03:18.689 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
Friday 19 September 2025 07:04:45 +0000 (0:00:00.235) 0:03:18.925 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Set architecture variable] *****************************
Friday 19 September 2025 07:04:45 +0000 (0:00:00.264) 0:03:19.189 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
Friday 19 September 2025 07:04:45 +0000 (0:00:00.177) 0:03:19.367 ******
skipping: [testbed-node-0] => (item=.tar.gz)
skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
skipping: [testbed-node-0]

TASK [k3s_server_post : Verify the downloaded tarball] *************************
Friday 19 September 2025 07:04:46 +0000 (0:00:00.607) 0:03:19.975 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
Friday 19 September 2025 07:04:46 +0000 (0:00:00.158) 0:03:20.133 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
Friday 19 September 2025 07:04:46 +0000 (0:00:00.187) 0:03:20.321 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Friday 19 September 2025 07:04:47 +0000 (0:00:00.191) 0:03:20.512 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Friday 19 September 2025 07:04:47 +0000 (0:00:00.235) 0:03:20.748 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Friday 19 September 2025 07:04:47 +0000 (0:00:00.393) 0:03:21.141 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Check Cilium version] **********************************
Friday 19 September 2025 07:04:47 +0000 (0:00:00.220) 0:03:21.361 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version]
************************ 2025-09-19 07:05:14.130517 | orchestrator | Friday 19 September 2025 07:04:48 +0000 (0:00:00.196) 0:03:21.558 ****** 2025-09-19 07:05:14.130524 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130532 | orchestrator | 2025-09-19 07:05:14.130540 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-19 07:05:14.130553 | orchestrator | Friday 19 September 2025 07:04:48 +0000 (0:00:00.324) 0:03:21.883 ****** 2025-09-19 07:05:14.130560 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130568 | orchestrator | 2025-09-19 07:05:14.130576 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-19 07:05:14.130584 | orchestrator | Friday 19 September 2025 07:04:48 +0000 (0:00:00.217) 0:03:22.101 ****** 2025-09-19 07:05:14.130592 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130600 | orchestrator | 2025-09-19 07:05:14.130607 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-19 07:05:14.130615 | orchestrator | Friday 19 September 2025 07:04:48 +0000 (0:00:00.173) 0:03:22.275 ****** 2025-09-19 07:05:14.130623 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130631 | orchestrator | 2025-09-19 07:05:14.130639 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-19 07:05:14.130647 | orchestrator | Friday 19 September 2025 07:04:49 +0000 (0:00:00.198) 0:03:22.473 ****** 2025-09-19 07:05:14.130654 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-19 07:05:14.130662 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-19 07:05:14.130670 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-19 07:05:14.130678 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-19 
07:05:14.130686 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130694 | orchestrator | 2025-09-19 07:05:14.130702 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-19 07:05:14.130710 | orchestrator | Friday 19 September 2025 07:04:49 +0000 (0:00:00.815) 0:03:23.289 ****** 2025-09-19 07:05:14.130717 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130725 | orchestrator | 2025-09-19 07:05:14.130733 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-19 07:05:14.130741 | orchestrator | Friday 19 September 2025 07:04:50 +0000 (0:00:00.207) 0:03:23.497 ****** 2025-09-19 07:05:14.130748 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130756 | orchestrator | 2025-09-19 07:05:14.130764 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-19 07:05:14.130772 | orchestrator | Friday 19 September 2025 07:04:50 +0000 (0:00:00.196) 0:03:23.693 ****** 2025-09-19 07:05:14.130780 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130787 | orchestrator | 2025-09-19 07:05:14.130795 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-19 07:05:14.130803 | orchestrator | Friday 19 September 2025 07:04:50 +0000 (0:00:00.201) 0:03:23.894 ****** 2025-09-19 07:05:14.130811 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130819 | orchestrator | 2025-09-19 07:05:14.130827 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-09-19 07:05:14.130835 | orchestrator | Friday 19 September 2025 07:04:50 +0000 (0:00:00.190) 0:03:24.085 ****** 2025-09-19 07:05:14.130843 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-19 07:05:14.130851 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get 
CiliumLoadBalancerIPPool.cilium.io)  2025-09-19 07:05:14.130858 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130866 | orchestrator | 2025-09-19 07:05:14.130874 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-19 07:05:14.130882 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.368) 0:03:24.454 ****** 2025-09-19 07:05:14.130890 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.130901 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.130909 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.130917 | orchestrator | 2025-09-19 07:05:14.130925 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-19 07:05:14.130933 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.330) 0:03:24.785 ****** 2025-09-19 07:05:14.130945 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:05:14.130953 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:05:14.130961 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:05:14.130968 | orchestrator | 2025-09-19 07:05:14.130976 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-19 07:05:14.130984 | orchestrator | 2025-09-19 07:05:14.130992 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-19 07:05:14.131000 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:01.039) 0:03:25.825 ****** 2025-09-19 07:05:14.131008 | orchestrator | ok: [testbed-manager] 2025-09-19 07:05:14.131015 | orchestrator | 2025-09-19 07:05:14.131023 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-19 07:05:14.131031 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:00.197) 0:03:26.022 ****** 2025-09-19 07:05:14.131039 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml 
for testbed-manager 2025-09-19 07:05:14.131046 | orchestrator | 2025-09-19 07:05:14.131054 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-19 07:05:14.131062 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:00.227) 0:03:26.250 ****** 2025-09-19 07:05:14.131070 | orchestrator | changed: [testbed-manager] 2025-09-19 07:05:14.131078 | orchestrator | 2025-09-19 07:05:14.131085 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-19 07:05:14.131093 | orchestrator | 2025-09-19 07:05:14.131101 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-19 07:05:14.131113 | orchestrator | Friday 19 September 2025 07:04:57 +0000 (0:00:05.113) 0:03:31.363 ****** 2025-09-19 07:05:14.131121 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:05:14.131129 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:05:14.131137 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:05:14.131145 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:05:14.131152 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:05:14.131160 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:05:14.131168 | orchestrator | 2025-09-19 07:05:14.131176 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-19 07:05:14.131184 | orchestrator | Friday 19 September 2025 07:04:58 +0000 (0:00:00.649) 0:03:32.013 ****** 2025-09-19 07:05:14.131192 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-19 07:05:14.131199 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-19 07:05:14.131207 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-19 07:05:14.131215 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2025-09-19 07:05:14.131223 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-19 07:05:14.131230 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-19 07:05:14.131238 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-19 07:05:14.131246 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-19 07:05:14.131254 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-19 07:05:14.131262 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-19 07:05:14.131269 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-19 07:05:14.131310 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-19 07:05:14.131318 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-19 07:05:14.131326 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-19 07:05:14.131334 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-19 07:05:14.131347 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-19 07:05:14.131355 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-19 07:05:14.131362 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-19 07:05:14.131370 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-19 07:05:14.131378 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/rook-mds=true) 2025-09-19 07:05:14.131385 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-19 07:05:14.131393 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-19 07:05:14.131401 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-19 07:05:14.131409 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-19 07:05:14.131416 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-19 07:05:14.131424 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-19 07:05:14.131435 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-19 07:05:14.131443 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-19 07:05:14.131451 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-19 07:05:14.131458 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-19 07:05:14.131466 | orchestrator | 2025-09-19 07:05:14.131474 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-19 07:05:14.131482 | orchestrator | Friday 19 September 2025 07:05:11 +0000 (0:00:13.361) 0:03:45.374 ****** 2025-09-19 07:05:14.131489 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.131497 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.131505 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:05:14.131513 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.131520 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.131528 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
07:05:14.131536 | orchestrator | 2025-09-19 07:05:14.131543 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-19 07:05:14.131551 | orchestrator | Friday 19 September 2025 07:05:12 +0000 (0:00:00.506) 0:03:45.880 ****** 2025-09-19 07:05:14.131559 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:05:14.131567 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:05:14.131574 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:05:14.131582 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:05:14.131590 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:05:14.131597 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:05:14.131605 | orchestrator | 2025-09-19 07:05:14.131613 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:05:14.131625 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:05:14.131635 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-19 07:05:14.131643 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-19 07:05:14.131651 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-19 07:05:14.131659 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-19 07:05:14.131671 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-19 07:05:14.131679 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-19 07:05:14.131687 | orchestrator | 2025-09-19 07:05:14.131695 | orchestrator | 2025-09-19 07:05:14.131703 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-19 07:05:14.131711 | orchestrator | Friday 19 September 2025 07:05:12 +0000 (0:00:00.368) 0:03:46.248 ****** 2025-09-19 07:05:14.131719 | orchestrator | =============================================================================== 2025-09-19 07:05:14.131726 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.01s 2025-09-19 07:05:14.131735 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.24s 2025-09-19 07:05:14.131743 | orchestrator | kubectl : Install required packages ------------------------------------ 15.45s 2025-09-19 07:05:14.131750 | orchestrator | Manage labels ---------------------------------------------------------- 13.36s 2025-09-19 07:05:14.131757 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.74s 2025-09-19 07:05:14.131763 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.44s 2025-09-19 07:05:14.131770 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.10s 2025-09-19 07:05:14.131776 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.11s 2025-09-19 07:05:14.131783 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.24s 2025-09-19 07:05:14.131790 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.95s 2025-09-19 07:05:14.131797 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.76s 2025-09-19 07:05:14.131803 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.38s 2025-09-19 07:05:14.131810 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.36s 
2025-09-19 07:05:14.131817 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.18s 2025-09-19 07:05:14.131823 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.91s 2025-09-19 07:05:14.131830 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.89s 2025-09-19 07:05:14.131836 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.86s 2025-09-19 07:05:14.131843 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.73s 2025-09-19 07:05:14.131852 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.73s 2025-09-19 07:05:14.131859 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.55s 2025-09-19 07:05:14.131866 | orchestrator | 2025-09-19 07:05:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:17.215972 | orchestrator | 2025-09-19 07:05:17 | INFO  | Task ff042b7c-db46-44ab-ab70-1941664426de is in state STARTED 2025-09-19 07:05:17.216055 | orchestrator | 2025-09-19 07:05:17 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:05:17.216085 | orchestrator | 2025-09-19 07:05:17 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:05:17.216107 | orchestrator | 2025-09-19 07:05:17 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:05:17.216123 | orchestrator | 2025-09-19 07:05:17 | INFO  | Task 17ac7666-5d2b-4377-a6e8-c8ef23416d4d is in state STARTED 2025-09-19 07:05:17.216140 | orchestrator | 2025-09-19 07:05:17 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:05:17.216192 | orchestrator | 2025-09-19 07:05:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:20.228777 | orchestrator | 2025-09-19 07:05:20 | INFO  | Task 
ff042b7c-db46-44ab-ab70-1941664426de is in state STARTED 2025-09-19 07:05:20.228860 | orchestrator | 2025-09-19 07:05:20 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:05:20.228874 | orchestrator | 2025-09-19 07:05:20 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:05:20.228885 | orchestrator | 2025-09-19 07:05:20 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:05:20.231639 | orchestrator | 2025-09-19 07:05:20 | INFO  | Task 17ac7666-5d2b-4377-a6e8-c8ef23416d4d is in state STARTED 2025-09-19 07:05:20.232839 | orchestrator | 2025-09-19 07:05:20 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:05:20.233358 | orchestrator | 2025-09-19 07:05:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:23.266385 | orchestrator | 2025-09-19 07:05:23 | INFO  | Task ff042b7c-db46-44ab-ab70-1941664426de is in state SUCCESS 2025-09-19 07:05:23.266670 | orchestrator | 2025-09-19 07:05:23 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:05:23.267176 | orchestrator | 2025-09-19 07:05:23 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:05:23.268171 | orchestrator | 2025-09-19 07:05:23 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:05:23.269854 | orchestrator | 2025-09-19 07:05:23 | INFO  | Task 17ac7666-5d2b-4377-a6e8-c8ef23416d4d is in state STARTED 2025-09-19 07:05:23.270903 | orchestrator | 2025-09-19 07:05:23 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:05:23.270932 | orchestrator | 2025-09-19 07:05:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:26.309694 | orchestrator | 2025-09-19 07:05:26 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:05:26.314589 | orchestrator | 2025-09-19 07:05:26 | INFO  | Task 
77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:05:26.315326 | orchestrator | 2025-09-19 07:05:26 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:05:26.316230 | orchestrator | 2025-09-19 07:05:26 | INFO  | Task 17ac7666-5d2b-4377-a6e8-c8ef23416d4d is in state SUCCESS 2025-09-19 07:05:26.319652 | orchestrator | 2025-09-19 07:05:26 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:05:26.319688 | orchestrator | 2025-09-19 07:05:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:29.373997 | orchestrator | 2025-09-19 07:05:29 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:05:29.375219 | orchestrator | 2025-09-19 07:05:29 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:05:29.376601 | orchestrator | 2025-09-19 07:05:29 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:05:29.377999 | orchestrator | 2025-09-19 07:05:29 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:05:29.378069 | orchestrator | 2025-09-19 07:05:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:32.419352 | orchestrator | 2025-09-19 07:05:32 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:05:32.419941 | orchestrator | 2025-09-19 07:05:32 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:05:32.420847 | orchestrator | 2025-09-19 07:05:32 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:05:32.421854 | orchestrator | 2025-09-19 07:05:32 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:05:32.421933 | orchestrator | 2025-09-19 07:05:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:05:35.470846 | orchestrator | 2025-09-19 07:05:35 | INFO  | Task 
7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:21.304623 | orchestrator | 2025-09-19 07:06:21 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:06:21.308867 | orchestrator | 2025-09-19 07:06:21 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:21.309670 | orchestrator | 2025-09-19 07:06:21 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:21.309921 | orchestrator | 2025-09-19 07:06:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:24.345184 | orchestrator | 2025-09-19 07:06:24 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:24.347173 | orchestrator | 2025-09-19 07:06:24 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:06:24.348011 | orchestrator | 2025-09-19 07:06:24 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:24.350472 | orchestrator | 2025-09-19 07:06:24 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:24.350498 | orchestrator | 2025-09-19 07:06:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:27.380350 | orchestrator | 2025-09-19 07:06:27 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:27.381495 | orchestrator | 2025-09-19 07:06:27 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state STARTED 2025-09-19 07:06:27.382212 | orchestrator | 2025-09-19 07:06:27 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:27.383131 | orchestrator | 2025-09-19 07:06:27 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:27.383157 | orchestrator | 2025-09-19 07:06:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:30.416282 | orchestrator | 2025-09-19 07:06:30 | INFO  | Task 
7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:30.418744 | orchestrator | 2025-09-19 07:06:30 | INFO  | Task 77748b26-21ba-40a6-96a7-75106b0c5cfe is in state SUCCESS 2025-09-19 07:06:30.420952 | orchestrator | 2025-09-19 07:06:30.420998 | orchestrator | 2025-09-19 07:06:30.421009 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-19 07:06:30.421022 | orchestrator | 2025-09-19 07:06:30.421033 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-19 07:06:30.421044 | orchestrator | Friday 19 September 2025 07:05:17 +0000 (0:00:00.187) 0:00:00.187 ****** 2025-09-19 07:06:30.421056 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-19 07:06:30.421068 | orchestrator | 2025-09-19 07:06:30.421079 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-19 07:06:30.421089 | orchestrator | Friday 19 September 2025 07:05:18 +0000 (0:00:00.850) 0:00:01.038 ****** 2025-09-19 07:06:30.421101 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:30.421112 | orchestrator | 2025-09-19 07:06:30.421123 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-19 07:06:30.421134 | orchestrator | Friday 19 September 2025 07:05:20 +0000 (0:00:01.408) 0:00:02.446 ****** 2025-09-19 07:06:30.421145 | orchestrator | changed: [testbed-manager] 2025-09-19 07:06:30.421156 | orchestrator | 2025-09-19 07:06:30.421167 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:06:30.421178 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:06:30.421190 | orchestrator | 2025-09-19 07:06:30.421201 | orchestrator | 2025-09-19 07:06:30.421212 | orchestrator | TASKS RECAP 
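The "Change server address in the kubeconfig file" task above rewrites the `server:` URL in the kubeconfig fetched from testbed-node-0 so that it points at the manager-reachable endpoint. A minimal sketch of that rewrite step; the function name and the example addresses are illustrative, not taken from the job:

```python
import re

def set_kubeconfig_server(text: str, new_server: str) -> str:
    """Replace every 'server:' URL in a kubeconfig document with new_server."""
    # (?m) makes ^/$ match per line, so each cluster entry is rewritten.
    return re.sub(r"(?m)^(\s*server:\s*).*$", r"\g<1>" + new_server, text)

example = "clusters:\n- cluster:\n    server: https://192.168.16.10:6443\n"
print(set_kubeconfig_server(example, "https://api.testbed:6443"))
```

The same substitution is applied twice in the plays above: once for the copy kept in the configuration repository and once for the copy used inside the manager service.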
********************************************************************
2025-09-19 07:06:30.421249 | orchestrator | Friday 19 September 2025 07:05:20 +0000 (0:00:00.609) 0:00:03.055 ******
2025-09-19 07:06:30.421260 | orchestrator | ===============================================================================
2025-09-19 07:06:30.421271 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.41s
2025-09-19 07:06:30.421290 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.85s
2025-09-19 07:06:30.421301 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.61s
2025-09-19 07:06:30.421312 | orchestrator |
2025-09-19 07:06:30.421323 | orchestrator |
2025-09-19 07:06:30.421335 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-19 07:06:30.421346 | orchestrator |
2025-09-19 07:06:30.421357 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-19 07:06:30.421368 | orchestrator | Friday 19 September 2025 07:05:17 +0000 (0:00:00.234) 0:00:00.234 ******
2025-09-19 07:06:30.421398 | orchestrator | ok: [testbed-manager]
2025-09-19 07:06:30.421410 | orchestrator |
2025-09-19 07:06:30.421421 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-19 07:06:30.421432 | orchestrator | Friday 19 September 2025 07:05:18 +0000 (0:00:00.680) 0:00:00.914 ******
2025-09-19 07:06:30.421443 | orchestrator | ok: [testbed-manager]
2025-09-19 07:06:30.421454 | orchestrator |
2025-09-19 07:06:30.421464 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 07:06:30.421475 | orchestrator | Friday 19 September 2025 07:05:18 +0000 (0:00:00.491) 0:00:01.405 ******
2025-09-19 07:06:30.421486 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 07:06:30.421497 | orchestrator |
2025-09-19 07:06:30.421508 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 07:06:30.421519 | orchestrator | Friday 19 September 2025 07:05:19 +0000 (0:00:00.690) 0:00:02.096 ******
2025-09-19 07:06:30.421530 | orchestrator | changed: [testbed-manager]
2025-09-19 07:06:30.421543 | orchestrator |
2025-09-19 07:06:30.421555 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-19 07:06:30.421568 | orchestrator | Friday 19 September 2025 07:05:20 +0000 (0:00:01.435) 0:00:03.531 ******
2025-09-19 07:06:30.421581 | orchestrator | changed: [testbed-manager]
2025-09-19 07:06:30.421593 | orchestrator |
2025-09-19 07:06:30.421605 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-19 07:06:30.421617 | orchestrator | Friday 19 September 2025 07:05:21 +0000 (0:00:00.702) 0:00:04.234 ******
2025-09-19 07:06:30.421630 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 07:06:30.421643 | orchestrator |
2025-09-19 07:06:30.421655 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-19 07:06:30.421668 | orchestrator | Friday 19 September 2025 07:05:23 +0000 (0:00:01.400) 0:00:05.635 ******
2025-09-19 07:06:30.421680 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 07:06:30.421692 | orchestrator |
2025-09-19 07:06:30.421705 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-19 07:06:30.421718 | orchestrator | Friday 19 September 2025 07:05:23 +0000 (0:00:00.755) 0:00:06.391 ******
2025-09-19 07:06:30.421730 | orchestrator | ok: [testbed-manager]
2025-09-19 07:06:30.421743 | orchestrator |
2025-09-19 07:06:30.421756 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-19 07:06:30.421768 | orchestrator | Friday 19 September 2025 07:05:24 +0000 (0:00:00.339) 0:00:06.730 ******
2025-09-19 07:06:30.421780 | orchestrator | ok: [testbed-manager]
2025-09-19 07:06:30.421793 | orchestrator |
2025-09-19 07:06:30.421805 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:06:30.421818 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:06:30.421830 | orchestrator |
2025-09-19 07:06:30.421842 | orchestrator |
2025-09-19 07:06:30.421855 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:06:30.421868 | orchestrator | Friday 19 September 2025 07:05:24 +0000 (0:00:00.287) 0:00:07.017 ******
2025-09-19 07:06:30.421880 | orchestrator | ===============================================================================
2025-09-19 07:06:30.421892 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.44s
2025-09-19 07:06:30.421903 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.40s
2025-09-19 07:06:30.421914 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.76s
2025-09-19 07:06:30.421938 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.70s
2025-09-19 07:06:30.421949 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s
2025-09-19 07:06:30.421960 | orchestrator | Get home directory of operator user ------------------------------------- 0.68s
2025-09-19 07:06:30.421971 | orchestrator | Create .kube directory -------------------------------------------------- 0.49s
2025-09-19 07:06:30.421989 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.34s
2025-09-19 07:06:30.422000 | orchestrator | Enable kubectl command line completion
---------------------------------- 0.29s
2025-09-19 07:06:30.422011 | orchestrator |
2025-09-19 07:06:30.422090 | orchestrator |
2025-09-19 07:06:30.422101 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-09-19 07:06:30.422112 | orchestrator |
2025-09-19 07:06:30.422123 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-19 07:06:30.422134 | orchestrator | Friday 19 September 2025 07:04:12 +0000 (0:00:00.259) 0:00:00.259 ******
2025-09-19 07:06:30.422145 | orchestrator | ok: [localhost] => {
2025-09-19 07:06:30.422157 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-09-19 07:06:30.422169 | orchestrator | }
2025-09-19 07:06:30.422180 | orchestrator |
2025-09-19 07:06:30.422191 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-09-19 07:06:30.422201 | orchestrator | Friday 19 September 2025 07:04:12 +0000 (0:00:00.061) 0:00:00.321 ******
2025-09-19 07:06:30.422214 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 1, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-09-19 07:06:30.422261 | orchestrator | ...ignoring
2025-09-19 07:06:30.422273 | orchestrator |
2025-09-19 07:06:30.422290 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-09-19 07:06:30.422301 | orchestrator | Friday 19 September 2025 07:04:16 +0000 (0:00:03.981) 0:00:04.302 ******
2025-09-19 07:06:30.422312 | orchestrator | skipping: [localhost]
2025-09-19 07:06:30.422323 | orchestrator |
2025-09-19 07:06:30.422333 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-09-19 07:06:30.422344 | orchestrator | Friday 19 September 2025 07:04:16 +0000 (0:00:00.054) 0:00:04.356 ******
2025-09-19 07:06:30.422355 | orchestrator | ok: [localhost]
2025-09-19 07:06:30.422366 | orchestrator |
2025-09-19 07:06:30.422377 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:06:30.422387 | orchestrator |
2025-09-19 07:06:30.422398 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:06:30.422409 | orchestrator | Friday 19 September 2025 07:04:17 +0000 (0:00:00.524) 0:00:04.880 ******
2025-09-19 07:06:30.422420 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:30.422431 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:06:30.422442 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:06:30.422453 | orchestrator |
2025-09-19 07:06:30.422464 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:06:30.422474 | orchestrator | Friday 19 September 2025 07:04:17 +0000 (0:00:00.377) 0:00:05.258 ******
2025-09-19 07:06:30.422485 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-09-19 07:06:30.422497 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-09-19 07:06:30.422508 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-09-19 07:06:30.422518 | orchestrator |
2025-09-19 07:06:30.422529 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-09-19 07:06:30.422540 | orchestrator |
2025-09-19 07:06:30.422551 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 07:06:30.422561 | orchestrator | Friday 19 September 2025 07:04:18 +0000 (0:00:00.558) 0:00:05.816 ******
2025-09-19 07:06:30.422572 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:06:30.422584 | orchestrator |
2025-09-19 07:06:30.422594 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 07:06:30.422605 | orchestrator | Friday 19 September 2025 07:04:18 +0000 (0:00:00.979) 0:00:06.315 ******
2025-09-19 07:06:30.422616 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:30.422627 | orchestrator |
2025-09-19 07:06:30.422637 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-09-19 07:06:30.422656 | orchestrator | Friday 19 September 2025 07:04:19 +0000 (0:00:00.383) 0:00:07.295 ******
2025-09-19 07:06:30.422667 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:30.422678 | orchestrator |
2025-09-19 07:06:30.422689 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-09-19 07:06:30.422699 | orchestrator | Friday 19 September 2025 07:04:20 +0000 (0:00:00.485) 0:00:07.679 ******
2025-09-19 07:06:30.422710 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:30.422721 | orchestrator |
2025-09-19 07:06:30.422732 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-09-19 07:06:30.422742 | orchestrator | Friday 19 September 2025 07:04:20 +0000 (0:00:00.552) 0:00:08.164 ******
2025-09-19 07:06:30.422753 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:30.422764 | orchestrator |
2025-09-19 07:06:30.422775 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-09-19 07:06:30.422785 | orchestrator | Friday 19 September 2025 07:04:21 +0000 (0:00:00.739) 0:00:08.716 ******
2025-09-19 07:06:30.422796 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:30.422807 | orchestrator |
2025-09-19 07:06:30.422818 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 07:06:30.422829 | orchestrator | Friday 19 September 2025 07:04:21 +0000 (0:00:01.169) 0:00:09.456 ******
2025-09-19 07:06:30.422840 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:06:30.422850 | orchestrator |
2025-09-19 07:06:30.422861 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 07:06:30.422881 | orchestrator | Friday 19 September 2025 07:04:23 +0000 (0:00:00.848) 0:00:10.625 ******
2025-09-19 07:06:30.422892 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:30.422903 | orchestrator |
2025-09-19 07:06:30.422914 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-09-19 07:06:30.422925 | orchestrator | Friday 19 September 2025 07:04:23 +0000 (0:00:00.589) 0:00:11.474 ******
2025-09-19 07:06:30.422935 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:30.422946 | orchestrator |
2025-09-19 07:06:30.422957 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-09-19 07:06:30.422968 | orchestrator | Friday 19 September 2025 07:04:24 +0000 (0:00:00.539) 0:00:12.063 ******
2025-09-19 07:06:30.422978 | orchestrator |
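The "Check RabbitMQ service" task above is a wait_for-style probe: it scrapes 192.168.16.9:15672 for the string "RabbitMQ Management" and times out when nothing is deployed yet, which the play explicitly tolerates via `...ignoring`. A rough, simplified equivalent of that probe (the host, port, and search string come from the log; the function name and raw-socket approach are my own sketch, not the module's implementation):

```python
import socket
import time

def wait_for_banner(host: str, port: int, needle: bytes, timeout: float = 5.0) -> bool:
    """Poll host:port until the HTTP response contains `needle`, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2) as sock:
                sock.sendall(b"GET / HTTP/1.0\r\nHost: %b\r\n\r\n" % host.encode())
                if needle in sock.recv(65536):
                    return True
        except OSError:
            pass  # nothing listening yet; retry until the deadline
        time.sleep(1)
    return False

# As in the play, a False result just means RabbitMQ is not deployed yet,
# so the caller can fall back to a fresh deploy instead of an upgrade.
```

This mirrors the branching that follows in the log: the probe failing means `kolla_action_rabbitmq = upgrade` is skipped and the deploy path (`kolla_action_ng`) is taken instead.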
skipping: [testbed-node-0]
2025-09-19 07:06:30.422989 | orchestrator |
2025-09-19 07:06:30.423000 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-09-19 07:06:30.423011 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:00.539) 0:00:12.603 ******
2025-09-19 07:06:30.423031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:06:30.423050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:06:30.423070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:06:30.423083 | orchestrator |
2025-09-19 07:06:30.423094 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-09-19 07:06:30.423105 | orchestrator | Friday 19 September 2025 07:04:26 +0000 (0:00:01.054) 0:00:13.658 ******
2025-09-19 07:06:30.423125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:06:30.423139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:06:30.423273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:06:30.423301 | orchestrator |
2025-09-19 07:06:30.423312 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-09-19 07:06:30.423323 | orchestrator | Friday 19 September 2025 07:04:28 +0000 (0:00:02.880) 0:00:16.538 ******
2025-09-19 07:06:30.423334 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 07:06:30.423345 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 07:06:30.423356 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 07:06:30.423367 | orchestrator |
2025-09-19 07:06:30.423378 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
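Each "Copying over …" task above renders a Jinja2 template from the role (rabbitmq-env.conf.j2, rabbitmq.conf.j2, erl_inetrc.j2, and so on) into the node's kolla config directory, substituting per-node values such as the log directory. A toy stand-in for that render step, using Python's stdlib `string.Template` instead of Jinja2 so it runs without dependencies; the template text and variable names are illustrative only, not the role's actual templates:

```python
from string import Template

# Simplified stand-in for the Jinja2 rendering the kolla role performs.
# The keys and template body here are assumptions for illustration.
ENV_CONF = Template("RABBITMQ_LOG_BASE=$log_dir\nNODENAME=rabbit@$hostname\n")

def render_env_conf(log_dir: str, hostname: str) -> str:
    """Render a minimal rabbitmq-env.conf-style file from the template above."""
    return ENV_CONF.substitute(log_dir=log_dir, hostname=hostname)

print(render_env_conf("/var/log/kolla/rabbitmq", "testbed-node-0"))
```

The rendered files land under the directories created by "Ensuring config directories exist" and are bind-mounted read-only into the container (`/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro` in the item dicts above).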
2025-09-19 07:06:30.423389 | orchestrator | Friday 19 September 2025 07:04:31 +0000 (0:00:02.295) 0:00:18.833 ******
2025-09-19 07:06:30.423400 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-19 07:06:30.423411 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-19 07:06:30.423422 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-19 07:06:30.423432 | orchestrator |
2025-09-19 07:06:30.423443 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-09-19 07:06:30.423464 | orchestrator | Friday 19 September 2025 07:04:33 +0000 (0:00:01.824) 0:00:20.658 ******
2025-09-19 07:06:30.423475 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-19 07:06:30.423486 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-19 07:06:30.423496 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-19 07:06:30.423507 | orchestrator |
2025-09-19 07:06:30.423518 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-09-19 07:06:30.423529 | orchestrator | Friday 19 September 2025 07:04:35 +0000 (0:00:02.125) 0:00:22.599 ******
2025-09-19 07:06:30.423539 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-19 07:06:30.423550 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-19 07:06:30.423561 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-19 07:06:30.423571 | orchestrator |
2025-09-19 07:06:30.423582 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-09-19 07:06:30.423593 | orchestrator | Friday 19 September 2025 07:04:37 +0000 (0:00:02.125) 0:00:24.724 ******
2025-09-19 07:06:30.423612 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-19 07:06:30.423623 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-19 07:06:30.423634 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-19 07:06:30.423645 | orchestrator |
2025-09-19 07:06:30.423659 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-09-19 07:06:30.423668 | orchestrator | Friday 19 September 2025 07:04:38 +0000 (0:00:01.494) 0:00:26.219 ******
2025-09-19 07:06:30.423678 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-19 07:06:30.423688 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-19 07:06:30.423697 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-19 07:06:30.423707 | orchestrator |
2025-09-19 07:06:30.423716 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 07:06:30.423726 | orchestrator | Friday 19 September 2025 07:04:40 +0000 (0:00:01.662) 0:00:27.881 ******
2025-09-19 07:06:30.423736 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:30.423745 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:06:30.423755 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:06:30.423764 | orchestrator |
2025-09-19 07:06:30.423774 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-09-19 07:06:30.423784 | orchestrator | Friday 19 September 2025 07:04:40 +0000 (0:00:00.545) 0:00:28.426 ******
2025-09-19 07:06:30.423794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:06:30.423811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:06:30.423823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:06:30.423839 | orchestrator |
2025-09-19 07:06:30.423849 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-09-19 07:06:30.423863 | orchestrator | Friday 19 September 2025 07:04:42 +0000 (0:00:01.471) 0:00:29.898 ******
2025-09-19 07:06:30.423872 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:30.423882 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:30.423892 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:30.423901 | orchestrator |
2025-09-19 07:06:30.423911 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-09-19 07:06:30.423921 | orchestrator | Friday 19 September 2025 07:04:43 +0000 (0:00:00.990) 0:00:30.888 ******
2025-09-19 07:06:30.423931 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:30.423941 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:30.423950 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:30.423960 | orchestrator |
2025-09-19 07:06:30.423970 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-09-19 07:06:30.423979 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:08.072) 0:00:38.961 ******
2025-09-19 07:06:30.423989 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:06:30.423999 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:06:30.424008 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:06:30.424018 | orchestrator |
2025-09-19 07:06:30.424028 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-19 07:06:30.424038 | orchestrator |
2025-09-19 07:06:30.424047 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-19 07:06:30.424057 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.379) 0:00:39.341 ******
2025-09-19 07:06:30.424067 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:06:30.424076 | orchestrator |
2025-09-19 07:06:30.424086 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-19 07:06:30.424096 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:00.667) 0:00:40.008 ******
2025-09-19 07:06:30.424105 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:06:30.424115 | orchestrator |
2025-09-19 07:06:30.424125 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-19 07:06:30.424134 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:00.365) 0:00:40.373 ******
2025-09-
07:06:30.424144 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:30.424153 | orchestrator | 2025-09-19 07:06:30.424163 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 07:06:30.424172 | orchestrator | Friday 19 September 2025 07:04:54 +0000 (0:00:01.696) 0:00:42.070 ****** 2025-09-19 07:06:30.424182 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:06:30.424192 | orchestrator | 2025-09-19 07:06:30.424201 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 07:06:30.424211 | orchestrator | 2025-09-19 07:06:30.424237 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 07:06:30.424248 | orchestrator | Friday 19 September 2025 07:05:50 +0000 (0:00:56.086) 0:01:38.156 ****** 2025-09-19 07:06:30.424257 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:30.424267 | orchestrator | 2025-09-19 07:06:30.424282 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 07:06:30.424291 | orchestrator | Friday 19 September 2025 07:05:51 +0000 (0:00:00.544) 0:01:38.701 ****** 2025-09-19 07:06:30.424301 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:06:30.424311 | orchestrator | 2025-09-19 07:06:30.424320 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 07:06:30.424329 | orchestrator | Friday 19 September 2025 07:05:51 +0000 (0:00:00.262) 0:01:38.964 ****** 2025-09-19 07:06:30.424339 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:06:30.424348 | orchestrator | 2025-09-19 07:06:30.424358 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 07:06:30.424367 | orchestrator | Friday 19 September 2025 07:05:57 +0000 (0:00:06.612) 0:01:45.577 ****** 2025-09-19 07:06:30.424377 | orchestrator | changed: 
[testbed-node-1] 2025-09-19 07:06:30.424386 | orchestrator | 2025-09-19 07:06:30.424396 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 07:06:30.424405 | orchestrator | 2025-09-19 07:06:30.424415 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 07:06:30.424424 | orchestrator | Friday 19 September 2025 07:06:08 +0000 (0:00:10.332) 0:01:55.910 ****** 2025-09-19 07:06:30.424434 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:30.424443 | orchestrator | 2025-09-19 07:06:30.424458 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 07:06:30.424468 | orchestrator | Friday 19 September 2025 07:06:08 +0000 (0:00:00.677) 0:01:56.588 ****** 2025-09-19 07:06:30.424478 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:06:30.424487 | orchestrator | 2025-09-19 07:06:30.424497 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 07:06:30.424506 | orchestrator | Friday 19 September 2025 07:06:09 +0000 (0:00:00.363) 0:01:56.951 ****** 2025-09-19 07:06:30.424516 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:30.424526 | orchestrator | 2025-09-19 07:06:30.424535 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 07:06:30.424545 | orchestrator | Friday 19 September 2025 07:06:10 +0000 (0:00:01.533) 0:01:58.485 ****** 2025-09-19 07:06:30.424554 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:06:30.424564 | orchestrator | 2025-09-19 07:06:30.424573 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-19 07:06:30.424583 | orchestrator | 2025-09-19 07:06:30.424592 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-19 07:06:30.424602 | orchestrator | Friday 19 
September 2025 07:06:25 +0000 (0:00:15.042) 0:02:13.527 ****** 2025-09-19 07:06:30.424612 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:06:30.424621 | orchestrator | 2025-09-19 07:06:30.424631 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-19 07:06:30.424640 | orchestrator | Friday 19 September 2025 07:06:26 +0000 (0:00:00.712) 0:02:14.240 ****** 2025-09-19 07:06:30.424650 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 07:06:30.424659 | orchestrator | enable_outward_rabbitmq_True 2025-09-19 07:06:30.424669 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 07:06:30.424679 | orchestrator | outward_rabbitmq_restart 2025-09-19 07:06:30.424692 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:06:30.424702 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:06:30.424712 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:06:30.424721 | orchestrator | 2025-09-19 07:06:30.424731 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-19 07:06:30.424740 | orchestrator | skipping: no hosts matched 2025-09-19 07:06:30.424750 | orchestrator | 2025-09-19 07:06:30.424760 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-19 07:06:30.424769 | orchestrator | skipping: no hosts matched 2025-09-19 07:06:30.424779 | orchestrator | 2025-09-19 07:06:30.424788 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-19 07:06:30.424803 | orchestrator | skipping: no hosts matched 2025-09-19 07:06:30.424813 | orchestrator | 2025-09-19 07:06:30.424822 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:06:30.424832 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 
skipped=1  rescued=0 ignored=1  2025-09-19 07:06:30.424842 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 07:06:30.424853 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 07:06:30.424862 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 07:06:30.424872 | orchestrator | 2025-09-19 07:06:30.424882 | orchestrator | 2025-09-19 07:06:30.424891 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:06:30.424901 | orchestrator | Friday 19 September 2025 07:06:28 +0000 (0:00:02.286) 0:02:16.527 ****** 2025-09-19 07:06:30.424910 | orchestrator | =============================================================================== 2025-09-19 07:06:30.424920 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.46s 2025-09-19 07:06:30.424929 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.84s 2025-09-19 07:06:30.424939 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.07s 2025-09-19 07:06:30.424948 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.98s 2025-09-19 07:06:30.424958 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.88s 2025-09-19 07:06:30.424967 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.30s 2025-09-19 07:06:30.424976 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.29s 2025-09-19 07:06:30.424986 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.13s 2025-09-19 07:06:30.424996 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 
1.94s 2025-09-19 07:06:30.425005 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.89s 2025-09-19 07:06:30.425014 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.82s 2025-09-19 07:06:30.425024 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.66s 2025-09-19 07:06:30.425034 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.49s 2025-09-19 07:06:30.425043 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.47s 2025-09-19 07:06:30.425052 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.17s 2025-09-19 07:06:30.425062 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.05s 2025-09-19 07:06:30.425072 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.99s 2025-09-19 07:06:30.425086 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.99s 2025-09-19 07:06:30.425096 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.98s 2025-09-19 07:06:30.425106 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.85s 2025-09-19 07:06:30.425116 | orchestrator | 2025-09-19 07:06:30 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:30.425126 | orchestrator | 2025-09-19 07:06:30 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:30.425136 | orchestrator | 2025-09-19 07:06:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:33.451264 | orchestrator | 2025-09-19 07:06:33 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:33.451705 | orchestrator | 2025-09-19 07:06:33 | INFO  | Task 
6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:33.452670 | orchestrator | 2025-09-19 07:06:33 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:33.452705 | orchestrator | 2025-09-19 07:06:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:36.481460 | orchestrator | 2025-09-19 07:06:36 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:36.482516 | orchestrator | 2025-09-19 07:06:36 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:36.486435 | orchestrator | 2025-09-19 07:06:36 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:36.486450 | orchestrator | 2025-09-19 07:06:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:39.519343 | orchestrator | 2025-09-19 07:06:39 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:39.519449 | orchestrator | 2025-09-19 07:06:39 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:39.520274 | orchestrator | 2025-09-19 07:06:39 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:39.520299 | orchestrator | 2025-09-19 07:06:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:42.566701 | orchestrator | 2025-09-19 07:06:42 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:42.568664 | orchestrator | 2025-09-19 07:06:42 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:42.568908 | orchestrator | 2025-09-19 07:06:42 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:42.569075 | orchestrator | 2025-09-19 07:06:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:45.602178 | orchestrator | 2025-09-19 07:06:45 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state 
STARTED 2025-09-19 07:06:45.602862 | orchestrator | 2025-09-19 07:06:45 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:45.603349 | orchestrator | 2025-09-19 07:06:45 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:45.603517 | orchestrator | 2025-09-19 07:06:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:48.643714 | orchestrator | 2025-09-19 07:06:48 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:48.645673 | orchestrator | 2025-09-19 07:06:48 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:48.645728 | orchestrator | 2025-09-19 07:06:48 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:48.645744 | orchestrator | 2025-09-19 07:06:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:51.682420 | orchestrator | 2025-09-19 07:06:51 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:51.682508 | orchestrator | 2025-09-19 07:06:51 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:51.683406 | orchestrator | 2025-09-19 07:06:51 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:51.683429 | orchestrator | 2025-09-19 07:06:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:54.716267 | orchestrator | 2025-09-19 07:06:54 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:54.717098 | orchestrator | 2025-09-19 07:06:54 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:54.718563 | orchestrator | 2025-09-19 07:06:54 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:54.718980 | orchestrator | 2025-09-19 07:06:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:06:57.768320 | orchestrator | 
2025-09-19 07:06:57 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:06:57.769571 | orchestrator | 2025-09-19 07:06:57 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:06:57.771789 | orchestrator | 2025-09-19 07:06:57 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:06:57.772271 | orchestrator | 2025-09-19 07:06:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:00.825095 | orchestrator | 2025-09-19 07:07:00 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:00.826838 | orchestrator | 2025-09-19 07:07:00 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:00.828978 | orchestrator | 2025-09-19 07:07:00 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:07:00.829283 | orchestrator | 2025-09-19 07:07:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:03.874270 | orchestrator | 2025-09-19 07:07:03 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:03.876500 | orchestrator | 2025-09-19 07:07:03 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:03.877612 | orchestrator | 2025-09-19 07:07:03 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:07:03.877835 | orchestrator | 2025-09-19 07:07:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:06.917076 | orchestrator | 2025-09-19 07:07:06 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:06.918463 | orchestrator | 2025-09-19 07:07:06 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:06.920099 | orchestrator | 2025-09-19 07:07:06 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:07:06.920574 | orchestrator | 2025-09-19 07:07:06 | INFO  | 
Wait 1 second(s) until the next check 2025-09-19 07:07:09.970269 | orchestrator | 2025-09-19 07:07:09 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:09.971814 | orchestrator | 2025-09-19 07:07:09 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:09.974616 | orchestrator | 2025-09-19 07:07:09 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:07:09.974760 | orchestrator | 2025-09-19 07:07:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:13.008537 | orchestrator | 2025-09-19 07:07:13 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:13.009132 | orchestrator | 2025-09-19 07:07:13 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:13.011793 | orchestrator | 2025-09-19 07:07:13 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:07:13.011943 | orchestrator | 2025-09-19 07:07:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:16.068013 | orchestrator | 2025-09-19 07:07:16 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:16.069413 | orchestrator | 2025-09-19 07:07:16 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:16.071319 | orchestrator | 2025-09-19 07:07:16 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:07:16.071730 | orchestrator | 2025-09-19 07:07:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:19.117511 | orchestrator | 2025-09-19 07:07:19 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:19.118409 | orchestrator | 2025-09-19 07:07:19 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:19.119345 | orchestrator | 2025-09-19 07:07:19 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state 
STARTED 2025-09-19 07:07:19.119453 | orchestrator | 2025-09-19 07:07:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:22.184979 | orchestrator | 2025-09-19 07:07:22 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:22.185066 | orchestrator | 2025-09-19 07:07:22 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:22.185075 | orchestrator | 2025-09-19 07:07:22 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state STARTED 2025-09-19 07:07:22.185082 | orchestrator | 2025-09-19 07:07:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:25.228316 | orchestrator | 2025-09-19 07:07:25 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:25.229393 | orchestrator | 2025-09-19 07:07:25 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:25.231785 | orchestrator | 2025-09-19 07:07:25 | INFO  | Task 0706fe50-6549-437c-b991-c8d241943484 is in state SUCCESS 2025-09-19 07:07:25.232046 | orchestrator | 2025-09-19 07:07:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:25.234266 | orchestrator | 2025-09-19 07:07:25.234304 | orchestrator | 2025-09-19 07:07:25.234316 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:07:25.234328 | orchestrator | 2025-09-19 07:07:25.234339 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:07:25.234351 | orchestrator | Friday 19 September 2025 07:05:02 +0000 (0:00:00.331) 0:00:00.331 ****** 2025-09-19 07:07:25.234362 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:07:25.234375 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:07:25.234386 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:07:25.234396 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:07:25.234407 | orchestrator | ok: [testbed-node-4] 2025-09-19 
07:07:25.234418 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:07:25.234428 | orchestrator | 2025-09-19 07:07:25.234440 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:07:25.234451 | orchestrator | Friday 19 September 2025 07:05:03 +0000 (0:00:01.184) 0:00:01.515 ****** 2025-09-19 07:07:25.234500 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-19 07:07:25.234513 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-19 07:07:25.234540 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-19 07:07:25.234551 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-19 07:07:25.234598 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-19 07:07:25.234610 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-19 07:07:25.234622 | orchestrator | 2025-09-19 07:07:25.234634 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-19 07:07:25.234646 | orchestrator | 2025-09-19 07:07:25.234742 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-19 07:07:25.234756 | orchestrator | Friday 19 September 2025 07:05:05 +0000 (0:00:02.065) 0:00:03.580 ****** 2025-09-19 07:07:25.234768 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:07:25.234804 | orchestrator | 2025-09-19 07:07:25.234816 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-19 07:07:25.234827 | orchestrator | Friday 19 September 2025 07:05:06 +0000 (0:00:01.026) 0:00:04.607 ****** 2025-09-19 07:07:25.234841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.234856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.234870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.234884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.234896 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.234910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.234923 | orchestrator | 2025-09-19 07:07:25.234950 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-19 07:07:25.234964 | orchestrator | Friday 19 September 2025 07:05:07 +0000 (0:00:01.239) 0:00:05.846 ****** 2025-09-19 07:07:25.234977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.234997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.235019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.235032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.235045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.235057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.235070 | orchestrator | 2025-09-19 07:07:25.235084 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-19 07:07:25.235097 | orchestrator | Friday 19 September 2025 07:05:09 +0000 (0:00:01.757) 0:00:07.604 ****** 
2025-09-19 07:07:25.235109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235245 | orchestrator |
2025-09-19 07:07:25.235257 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-09-19 07:07:25.235269 | orchestrator | Friday 19 September 2025 07:05:10 +0000 (0:00:01.523) 0:00:09.128 ******
2025-09-19 07:07:25.235280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235335 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235346 | orchestrator |
2025-09-19 07:07:25.235363 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-09-19 07:07:25.235374 | orchestrator | Friday 19 September 2025 07:05:12 +0000 (0:00:01.957) 0:00:11.085 ******
2025-09-19 07:07:25.235386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.235465 | orchestrator |
2025-09-19 07:07:25.235476 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-09-19 07:07:25.235487 | orchestrator | Friday 19 September 2025 07:05:14 +0000 (0:00:01.287) 0:00:12.373 ******
2025-09-19 07:07:25.235498 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:07:25.235509 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:07:25.235520 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:07:25.235530 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:07:25.235541 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:07:25.235552 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:07:25.235563 | orchestrator |
2025-09-19 07:07:25.235573 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-09-19 07:07:25.235584 | orchestrator | Friday 19 September 2025 07:05:17 +0000 (0:00:03.070) 0:00:15.443 ******
2025-09-19 07:07:25.235595 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-09-19 07:07:25.235606 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-09-19 07:07:25.235617 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-09-19 07:07:25.235627 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-09-19 07:07:25.235645 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-09-19 07:07:25.235655 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-09-19 07:07:25.235666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:07:25.235677 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:07:25.235694 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:07:25.235705 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:07:25.235716 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:07:25.235727 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 07:07:25.235737 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:07:25.235750 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:07:25.235766 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:07:25.235777 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:07:25.235788 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:07:25.235799 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 07:07:25.235810 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:07:25.235821 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:07:25.235832 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:07:25.235842 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:07:25.235853 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:07:25.235864 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 07:07:25.235875 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:07:25.235885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:07:25.235896 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:07:25.235907 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:07:25.235917 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:07:25.235928 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 07:07:25.235938 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:07:25.235950 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:07:25.235960 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:07:25.235971 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:07:25.235988 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:07:25.235999 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 07:07:25.236010 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-19 07:07:25.236021 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-19 07:07:25.236032 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-19 07:07:25.236042 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-19 07:07:25.236053 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-19 07:07:25.236064 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-19 07:07:25.236074 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-09-19 07:07:25.236085 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-09-19 07:07:25.236101 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-09-19 07:07:25.236113 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-09-19 07:07:25.236123 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-09-19 07:07:25.236134 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-09-19 07:07:25.236145 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-19 07:07:25.236161 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-19 07:07:25.236201 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-19 07:07:25.236213 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-19 07:07:25.236224 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-19 07:07:25.236235 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-19 07:07:25.236246 | orchestrator |
2025-09-19 07:07:25.236256 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:07:25.236268 | orchestrator | Friday 19 September 2025 07:05:36 +0000 (0:00:19.156) 0:00:34.600 ******
2025-09-19 07:07:25.236278 | orchestrator |
2025-09-19 07:07:25.236289 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:07:25.236300 | orchestrator | Friday 19 September 2025 07:05:36 +0000 (0:00:00.289) 0:00:34.890 ******
2025-09-19 07:07:25.236311 | orchestrator |
2025-09-19 07:07:25.236322 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:07:25.236333 | orchestrator | Friday 19 September 2025 07:05:36 +0000 (0:00:00.065) 0:00:34.955 ******
2025-09-19 07:07:25.236343 | orchestrator |
2025-09-19 07:07:25.236354 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:07:25.236365 | orchestrator | Friday 19 September 2025 07:05:36 +0000 (0:00:00.063) 0:00:35.018 ******
2025-09-19 07:07:25.236382 | orchestrator |
2025-09-19 07:07:25.236394 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:07:25.236404 | orchestrator | Friday 19 September 2025 07:05:36 +0000 (0:00:00.064) 0:00:35.083 ******
2025-09-19 07:07:25.236415 | orchestrator |
2025-09-19 07:07:25.236426 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 07:07:25.236437 | orchestrator | Friday 19 September 2025 07:05:36 +0000 (0:00:00.063) 0:00:35.146 ******
2025-09-19 07:07:25.236448 | orchestrator |
2025-09-19 07:07:25.236458 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-09-19 07:07:25.236469 | orchestrator | Friday 19 September 2025 07:05:36 +0000 (0:00:00.066) 0:00:35.212 ******
2025-09-19 07:07:25.236480 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.236491 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:07:25.236501 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.236512 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:07:25.236523 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.236533 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:07:25.236544 | orchestrator |
2025-09-19 07:07:25.236555 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-19 07:07:25.236566 | orchestrator | Friday 19 September 2025 07:05:38 +0000 (0:00:01.899) 0:00:37.112 ******
2025-09-19 07:07:25.236577 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:07:25.236588 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:07:25.236598 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:07:25.236609 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:07:25.236620 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:07:25.236630 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:07:25.236641 | orchestrator |
2025-09-19 07:07:25.236652 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-19 07:07:25.236663 | orchestrator |
2025-09-19 07:07:25.236674 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-19 07:07:25.236685 | orchestrator | Friday 19 September 2025 07:06:13 +0000 (0:00:34.260) 0:01:11.372 ******
2025-09-19 07:07:25.236696 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:07:25.236707 | orchestrator |
2025-09-19 07:07:25.236717 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-19 07:07:25.236728 | orchestrator | Friday 19 September 2025 07:06:13 +0000 (0:00:00.582) 0:01:11.955 ******
2025-09-19 07:07:25.236739 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:07:25.236750 | orchestrator |
2025-09-19 07:07:25.236761 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-19 07:07:25.236771 | orchestrator | Friday 19 September 2025 07:06:14 +0000 (0:00:00.503) 0:01:12.458 ******
2025-09-19 07:07:25.236782 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.236793 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.236804 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.236814 | orchestrator |
2025-09-19 07:07:25.236825 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-19 07:07:25.236836 | orchestrator | Friday 19 September 2025 07:06:15 +0000 (0:00:01.021) 0:01:13.480 ******
2025-09-19 07:07:25.236847 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.236857 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.236868 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.236885 | orchestrator |
2025-09-19 07:07:25.236896 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-19 07:07:25.236907 | orchestrator | Friday 19 September 2025 07:06:15 +0000 (0:00:00.375) 0:01:13.855 ******
2025-09-19 07:07:25.236918 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.236929 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.236939 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.236950 | orchestrator |
2025-09-19 07:07:25.236961 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-19 07:07:25.236986 | orchestrator | Friday 19 September 2025 07:06:15 +0000 (0:00:00.416) 0:01:14.271 ******
2025-09-19 07:07:25.236997 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.237008 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.237019 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.237030 | orchestrator |
2025-09-19 07:07:25.237040 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-19 07:07:25.237051 | orchestrator | Friday 19 September 2025 07:06:16 +0000 (0:00:00.391) 0:01:14.663 ****** 2025-09-19 07:07:25.237062 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:07:25.237077 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:07:25.237088 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:07:25.237099 | orchestrator | 2025-09-19 07:07:25.237110 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-19 07:07:25.237121 | orchestrator | Friday 19 September 2025 07:06:17 +0000 (0:00:00.653) 0:01:15.316 ****** 2025-09-19 07:07:25.237132 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237143 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237154 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237223 | orchestrator | 2025-09-19 07:07:25.237236 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-19 07:07:25.237247 | orchestrator | Friday 19 September 2025 07:06:17 +0000 (0:00:00.394) 0:01:15.711 ****** 2025-09-19 07:07:25.237258 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237268 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237279 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237290 | orchestrator | 2025-09-19 07:07:25.237301 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-19 07:07:25.237312 | orchestrator | Friday 19 September 2025 07:06:17 +0000 (0:00:00.302) 0:01:16.013 ****** 2025-09-19 07:07:25.237323 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237334 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237344 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237355 | orchestrator | 2025-09-19 
07:07:25.237366 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-19 07:07:25.237377 | orchestrator | Friday 19 September 2025 07:06:18 +0000 (0:00:00.307) 0:01:16.321 ****** 2025-09-19 07:07:25.237387 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237398 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237409 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237419 | orchestrator | 2025-09-19 07:07:25.237430 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-19 07:07:25.237441 | orchestrator | Friday 19 September 2025 07:06:18 +0000 (0:00:00.505) 0:01:16.826 ****** 2025-09-19 07:07:25.237452 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237463 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237473 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237484 | orchestrator | 2025-09-19 07:07:25.237495 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-19 07:07:25.237506 | orchestrator | Friday 19 September 2025 07:06:18 +0000 (0:00:00.313) 0:01:17.140 ****** 2025-09-19 07:07:25.237517 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237527 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237538 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237549 | orchestrator | 2025-09-19 07:07:25.237560 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-19 07:07:25.237570 | orchestrator | Friday 19 September 2025 07:06:19 +0000 (0:00:00.317) 0:01:17.458 ****** 2025-09-19 07:07:25.237581 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237592 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237603 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237613 | orchestrator | 2025-09-19 
07:07:25.237624 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-19 07:07:25.237643 | orchestrator | Friday 19 September 2025 07:06:19 +0000 (0:00:00.304) 0:01:17.762 ****** 2025-09-19 07:07:25.237654 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237665 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237676 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237687 | orchestrator | 2025-09-19 07:07:25.237698 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-19 07:07:25.237708 | orchestrator | Friday 19 September 2025 07:06:19 +0000 (0:00:00.298) 0:01:18.061 ****** 2025-09-19 07:07:25.237719 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237729 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237739 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237748 | orchestrator | 2025-09-19 07:07:25.237758 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-19 07:07:25.237768 | orchestrator | Friday 19 September 2025 07:06:20 +0000 (0:00:00.514) 0:01:18.575 ****** 2025-09-19 07:07:25.237777 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237787 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237797 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237806 | orchestrator | 2025-09-19 07:07:25.237816 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-19 07:07:25.237826 | orchestrator | Friday 19 September 2025 07:06:20 +0000 (0:00:00.324) 0:01:18.900 ****** 2025-09-19 07:07:25.237835 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237845 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237854 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237864 | orchestrator | 2025-09-19 
07:07:25.237874 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-19 07:07:25.237883 | orchestrator | Friday 19 September 2025 07:06:20 +0000 (0:00:00.318) 0:01:19.218 ****** 2025-09-19 07:07:25.237893 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.237902 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.237917 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.237927 | orchestrator | 2025-09-19 07:07:25.237937 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 07:07:25.237947 | orchestrator | Friday 19 September 2025 07:06:21 +0000 (0:00:00.317) 0:01:19.536 ****** 2025-09-19 07:07:25.237956 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:07:25.237966 | orchestrator | 2025-09-19 07:07:25.237976 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-19 07:07:25.237985 | orchestrator | Friday 19 September 2025 07:06:22 +0000 (0:00:00.782) 0:01:20.319 ****** 2025-09-19 07:07:25.237995 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:07:25.238005 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:07:25.238014 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:07:25.238064 | orchestrator | 2025-09-19 07:07:25.238074 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-19 07:07:25.238084 | orchestrator | Friday 19 September 2025 07:06:22 +0000 (0:00:00.497) 0:01:20.817 ****** 2025-09-19 07:07:25.238099 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:07:25.238109 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:07:25.238119 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:07:25.238129 | orchestrator | 2025-09-19 07:07:25.238138 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2025-09-19 07:07:25.238148 | orchestrator | Friday 19 September 2025 07:06:22 +0000 (0:00:00.467) 0:01:21.284 ****** 2025-09-19 07:07:25.238158 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.238184 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.238194 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.238204 | orchestrator | 2025-09-19 07:07:25.238213 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-19 07:07:25.238223 | orchestrator | Friday 19 September 2025 07:06:23 +0000 (0:00:00.640) 0:01:21.924 ****** 2025-09-19 07:07:25.238241 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.238251 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.238260 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.238270 | orchestrator | 2025-09-19 07:07:25.238279 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-19 07:07:25.238289 | orchestrator | Friday 19 September 2025 07:06:24 +0000 (0:00:00.384) 0:01:22.309 ****** 2025-09-19 07:07:25.238299 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.238308 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.238318 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.238328 | orchestrator | 2025-09-19 07:07:25.238337 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-19 07:07:25.238347 | orchestrator | Friday 19 September 2025 07:06:24 +0000 (0:00:00.364) 0:01:22.674 ****** 2025-09-19 07:07:25.238357 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.238366 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.238376 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.238386 | orchestrator | 2025-09-19 07:07:25.238395 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2025-09-19 07:07:25.238405 | orchestrator | Friday 19 September 2025 07:06:24 +0000 (0:00:00.354) 0:01:23.028 ****** 2025-09-19 07:07:25.238414 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.238424 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.238433 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.238443 | orchestrator | 2025-09-19 07:07:25.238452 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-19 07:07:25.238462 | orchestrator | Friday 19 September 2025 07:06:25 +0000 (0:00:00.676) 0:01:23.705 ****** 2025-09-19 07:07:25.238472 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:07:25.238481 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:07:25.238491 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:07:25.238500 | orchestrator | 2025-09-19 07:07:25.238510 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-19 07:07:25.238520 | orchestrator | Friday 19 September 2025 07:06:25 +0000 (0:00:00.366) 0:01:24.071 ****** 2025-09-19 07:07:25.238530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.238543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:07:25.238553 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238639 | orchestrator |
2025-09-19 07:07:25.238649 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-19 07:07:25.238659 | orchestrator | Friday 19 September 2025 07:06:27 +0000 (0:00:01.575) 0:01:25.646 ******
2025-09-19 07:07:25.238669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238807 | orchestrator |
2025-09-19 07:07:25.238816 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-19 07:07:25.238826 | orchestrator | Friday 19 September 2025 07:06:31 +0000 (0:00:03.935) 0:01:29.582 ******
2025-09-19 07:07:25.238836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.238942 | orchestrator |
2025-09-19 07:07:25.238952 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 07:07:25.238961 | orchestrator | Friday 19 September 2025 07:06:33 +0000 (0:00:02.135) 0:01:31.717 ******
2025-09-19 07:07:25.238971 | orchestrator |
2025-09-19 07:07:25.238981 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 07:07:25.238990 | orchestrator | Friday 19 September 2025 07:06:33 +0000 (0:00:00.077) 0:01:31.798 ******
2025-09-19 07:07:25.239000 | orchestrator |
2025-09-19 07:07:25.239009 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 07:07:25.239019 | orchestrator | Friday 19 September 2025 07:06:33 +0000 (0:00:00.077) 0:01:31.875 ******
2025-09-19 07:07:25.239028 | orchestrator |
2025-09-19 07:07:25.239038 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-19 07:07:25.239047 | orchestrator | Friday 19 September 2025 07:06:33 +0000 (0:00:00.072) 0:01:31.948 ******
2025-09-19 07:07:25.239057 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:07:25.239067 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:07:25.239076 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:07:25.239086 | orchestrator |
2025-09-19 07:07:25.239095 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-19 07:07:25.239105 | orchestrator | Friday 19 September 2025 07:06:36 +0000 (0:00:02.685) 0:01:34.633 ******
2025-09-19 07:07:25.239115 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:07:25.239124 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:07:25.239134 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:07:25.239143 | orchestrator |
2025-09-19 07:07:25.239153 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-19 07:07:25.239163 | orchestrator | Friday 19 September 2025 07:06:43 +0000 (0:00:06.761) 0:01:41.395 ******
2025-09-19 07:07:25.239190 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:07:25.239224 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:07:25.239233 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:07:25.239243 | orchestrator |
2025-09-19 07:07:25.239253 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-19 07:07:25.239269 | orchestrator | Friday 19 September 2025 07:06:45 +0000 (0:00:02.276) 0:01:43.672 ******
2025-09-19 07:07:25.239279 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:07:25.239289 | orchestrator |
2025-09-19 07:07:25.239298 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-19 07:07:25.239308 | orchestrator | Friday 19 September 2025 07:06:45 +0000 (0:00:00.330) 0:01:44.002 ******
2025-09-19 07:07:25.239317 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.239327 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.239336 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.239346 | orchestrator |
2025-09-19 07:07:25.239356 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-19 07:07:25.239365 | orchestrator | Friday 19 September 2025 07:06:46 +0000 (0:00:00.866) 0:01:44.869 ******
2025-09-19 07:07:25.239375 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:07:25.239384 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:07:25.239394 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:07:25.239403 | orchestrator |
2025-09-19 07:07:25.239413 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-19 07:07:25.239423 | orchestrator | Friday 19 September 2025 07:06:47 +0000 (0:00:00.647) 0:01:45.517 ******
2025-09-19 07:07:25.239432 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.239442 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.239451 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.239461 | orchestrator |
2025-09-19 07:07:25.239470 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-19 07:07:25.239480 | orchestrator | Friday 19 September 2025 07:06:48 +0000 (0:00:00.874) 0:01:46.391 ******
2025-09-19 07:07:25.239489 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:07:25.239499 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:07:25.239508 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:07:25.239518 | orchestrator |
2025-09-19 07:07:25.239527 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-19 07:07:25.239537 | orchestrator | Friday 19 September 2025 07:06:48 +0000 (0:00:00.701) 0:01:47.092 ******
2025-09-19 07:07:25.239547 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.239556 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.239571 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.239581 | orchestrator |
2025-09-19 07:07:25.239591 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-19 07:07:25.239601 | orchestrator | Friday 19 September 2025 07:06:50 +0000 (0:00:01.724) 0:01:48.817 ******
2025-09-19 07:07:25.239610 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.239620 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.239629 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.239639 | orchestrator |
2025-09-19 07:07:25.239648 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-09-19 07:07:25.239658 | orchestrator | Friday 19 September 2025 07:06:51 +0000 (0:00:00.859) 0:01:49.677 ******
2025-09-19 07:07:25.239667 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.239677 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.239686 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.239696 | orchestrator |
2025-09-19 07:07:25.239705 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-19 07:07:25.239715 | orchestrator | Friday 19 September 2025 07:06:51 +0000 (0:00:00.262) 0:01:49.939 ******
2025-09-19 07:07:25.239729 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239740 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239756 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239766 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239776 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239786 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239796 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239806 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239827 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239837 | orchestrator |
2025-09-19 07:07:25.239847 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-19 07:07:25.239856 | orchestrator | Friday 19 September 2025 07:06:52 +0000 (0:00:01.309) 0:01:51.249 ******
2025-09-19 07:07:25.239866 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239881 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239900 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239910 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239940 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.239970 | orchestrator |
2025-09-19 07:07:25.239979 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-19 07:07:25.239989 | orchestrator | Friday 19 September 2025 07:06:56 +0000 (0:00:03.645) 0:01:54.894 ******
2025-09-19 07:07:25.240005 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.240015 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.240035 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.240045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.240055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.240065 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.240075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.240085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.240095 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:07:25.240105 | orchestrator |
2025-09-19 07:07:25.240115 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 07:07:25.240125 | orchestrator | Friday 19 September 2025 07:06:59 +0000 (0:00:02.808) 0:01:57.703
******
2025-09-19 07:07:25.240134 | orchestrator |
2025-09-19 07:07:25.240144 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 07:07:25.240154 | orchestrator | Friday 19 September 2025 07:06:59 +0000 (0:00:00.073) 0:01:57.776 ******
2025-09-19 07:07:25.240205 | orchestrator |
2025-09-19 07:07:25.240217 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 07:07:25.240227 | orchestrator | Friday 19 September 2025 07:06:59 +0000 (0:00:00.071) 0:01:57.848 ******
2025-09-19 07:07:25.240237 | orchestrator |
2025-09-19 07:07:25.240246 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-19 07:07:25.240256 | orchestrator | Friday 19 September 2025 07:06:59 +0000 (0:00:00.067) 0:01:57.915 ******
2025-09-19 07:07:25.240266 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:07:25.240276 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:07:25.240292 | orchestrator |
2025-09-19 07:07:25.240308 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-19 07:07:25.240318 | orchestrator | Friday 19 September 2025 07:07:05 +0000 (0:00:06.323) 0:02:04.239 ******
2025-09-19 07:07:25.240327 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:07:25.240337 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:07:25.240347 | orchestrator |
2025-09-19 07:07:25.240357 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-19 07:07:25.240366 | orchestrator | Friday 19 September 2025 07:07:12 +0000 (0:00:06.288) 0:02:10.527 ******
2025-09-19 07:07:25.240376 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:07:25.240386 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:07:25.240395 | orchestrator |
2025-09-19 07:07:25.240405 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-19 07:07:25.240415 | orchestrator | Friday 19 September 2025 07:07:18 +0000 (0:00:06.529) 0:02:17.057 ******
2025-09-19 07:07:25.240425 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:07:25.240434 | orchestrator |
2025-09-19 07:07:25.240444 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-19 07:07:25.240458 | orchestrator | Friday 19 September 2025 07:07:18 +0000 (0:00:00.139) 0:02:17.196 ******
2025-09-19 07:07:25.240468 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.240478 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.240487 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.240497 | orchestrator |
2025-09-19 07:07:25.240506 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-19 07:07:25.240516 | orchestrator | Friday 19 September 2025 07:07:19 +0000 (0:00:00.869) 0:02:18.065 ******
2025-09-19 07:07:25.240525 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:07:25.240535 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:07:25.240544 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:07:25.240554 | orchestrator |
2025-09-19 07:07:25.240563 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-19 07:07:25.240573 | orchestrator | Friday 19 September 2025 07:07:20 +0000 (0:00:00.688) 0:02:18.754 ******
2025-09-19 07:07:25.240582 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.240592 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.240601 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.240611 | orchestrator |
2025-09-19 07:07:25.240620 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-19 07:07:25.240630 | orchestrator | Friday 19 September 2025 07:07:21 +0000 (0:00:00.854) 0:02:19.609 ******
2025-09-19 07:07:25.240640 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:07:25.240649 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:07:25.240659 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:07:25.240668 | orchestrator |
2025-09-19 07:07:25.240677 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-19 07:07:25.240687 | orchestrator | Friday 19 September 2025 07:07:21 +0000 (0:00:00.681) 0:02:20.290 ******
2025-09-19 07:07:25.240696 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.240706 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.240715 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.240725 | orchestrator |
2025-09-19 07:07:25.240733 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-19 07:07:25.240741 | orchestrator | Friday 19 September 2025 07:07:22 +0000 (0:00:00.861) 0:02:21.151 ******
2025-09-19 07:07:25.240749 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:07:25.240757 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:07:25.240765 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:07:25.240772 | orchestrator |
2025-09-19 07:07:25.240780 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:07:25.240788 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-19 07:07:25.240796 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-19 07:07:25.240809 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-19 07:07:25.240817 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:07:25.240825 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:07:25.240833 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 07:07:25.240841 | orchestrator |
2025-09-19 07:07:25.240849 | orchestrator |
2025-09-19 07:07:25.240857 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:07:25.240864 | orchestrator | Friday 19 September 2025 07:07:23 +0000 (0:00:01.009) 0:02:22.160 ******
2025-09-19 07:07:25.240872 | orchestrator | ===============================================================================
2025-09-19 07:07:25.240880 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.26s
2025-09-19 07:07:25.240888 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.16s
2025-09-19 07:07:25.240896 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.05s
2025-09-19 07:07:25.240904 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.01s
2025-09-19 07:07:25.240911 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.81s
2025-09-19 07:07:25.240919 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.94s
2025-09-19 07:07:25.240927 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.65s
2025-09-19 07:07:25.240939 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.07s
2025-09-19 07:07:25.240947 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.81s
2025-09-19 07:07:25.240955 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.14s
2025-09-19 07:07:25.240962 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.07s
2025-09-19 07:07:25.240970 |
orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.96s 2025-09-19 07:07:25.240978 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.90s 2025-09-19 07:07:25.240986 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.76s 2025-09-19 07:07:25.240994 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.72s 2025-09-19 07:07:25.241001 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.58s 2025-09-19 07:07:25.241013 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.52s 2025-09-19 07:07:25.241021 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.31s 2025-09-19 07:07:25.241029 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.29s 2025-09-19 07:07:25.241037 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.24s 2025-09-19 07:07:28.281245 | orchestrator | 2025-09-19 07:07:28 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:28.286204 | orchestrator | 2025-09-19 07:07:28 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:28.286226 | orchestrator | 2025-09-19 07:07:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:31.342280 | orchestrator | 2025-09-19 07:07:31 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 2025-09-19 07:07:31.342606 | orchestrator | 2025-09-19 07:07:31 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:07:31.342674 | orchestrator | 2025-09-19 07:07:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:07:34.398882 | orchestrator | 2025-09-19 07:07:34 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED 
2025-09-19 07:10:21.966961 | orchestrator | 2025-09-19 07:10:21 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state STARTED
2025-09-19 07:10:21.967456 | orchestrator | 2025-09-19 07:10:21 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED
2025-09-19 07:10:21.967493 | orchestrator | 2025-09-19 07:10:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:10:25.018684 | orchestrator |
2025-09-19 07:10:25.018781 | orchestrator |
2025-09-19 07:10:25.018791 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:10:25.018799 | orchestrator |
2025-09-19 07:10:25.018806 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:10:25.018813 | orchestrator | Friday 19 September 2025  07:03:52 +0000 (0:00:00.289)       0:00:00.289 ******
2025-09-19 07:10:25.018819 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.018827 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.018833 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.018840 | orchestrator |
2025-09-19 07:10:25.018846 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:10:25.018870 | orchestrator | Friday 19 September 2025  07:03:53 +0000 (0:00:00.292)       0:00:00.582 ******
2025-09-19 07:10:25.018877 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-09-19 07:10:25.018883 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-09-19 07:10:25.018889 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-09-19 07:10:25.018895 | orchestrator |
2025-09-19 07:10:25.018914 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-09-19 07:10:25.018920 | orchestrator |
2025-09-19 07:10:25.018926 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-19 07:10:25.018931 | orchestrator | Friday 19 September 2025  07:03:53 +0000 (0:00:00.591)       0:00:01.173 ******
2025-09-19 07:10:25.018938 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.018944 | orchestrator |
2025-09-19 07:10:25.018950 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-09-19 07:10:25.018956 | orchestrator | Friday 19 September 2025  07:03:54 +0000 (0:00:00.717)       0:00:01.891 ******
2025-09-19 07:10:25.018961 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.019015 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.019023 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.019029 | orchestrator |
2025-09-19 07:10:25.019035 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-19 07:10:25.019041 | orchestrator | Friday 19 September 2025  07:03:55 +0000 (0:00:00.954)       0:00:02.846 ******
2025-09-19 07:10:25.019047 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.019080 | orchestrator |
2025-09-19 07:10:25.019086 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-09-19 07:10:25.019093 | orchestrator | Friday 19 September 2025  07:03:56 +0000 (0:00:00.711)       0:00:03.558 ******
2025-09-19 07:10:25.019099 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.019105 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.019112 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.019118 | orchestrator |
2025-09-19 07:10:25.019124 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-09-19 07:10:25.019130 | orchestrator | Friday 19 September 2025  07:03:56 +0000 (0:00:00.679)       0:00:04.237 ******
2025-09-19 07:10:25.019137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:10:25.019164 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:10:25.019170 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:10:25.019176 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:10:25.019183 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:10:25.019189 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-19 07:10:25.019197 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-19 07:10:25.019203 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-19 07:10:25.019210 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-19 07:10:25.019217 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-19 07:10:25.019223 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-19 07:10:25.019230 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-19 07:10:25.019236 | orchestrator |
2025-09-19 07:10:25.019243 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-19 07:10:25.019272 | orchestrator | Friday 19 September 2025  07:04:00 +0000 (0:00:03.396)       0:00:07.634 ******
2025-09-19 07:10:25.019279 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-19 07:10:25.019285 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-19 07:10:25.019292 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-19 07:10:25.019298 | orchestrator |
2025-09-19 07:10:25.019305 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-19 07:10:25.019311 | orchestrator | Friday 19 September 2025  07:04:01 +0000 (0:00:00.926)       0:00:08.560 ******
2025-09-19 07:10:25.019318 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-19 07:10:25.019324 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-19 07:10:25.019329 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-19 07:10:25.019333 | orchestrator |
2025-09-19 07:10:25.019337 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-19 07:10:25.019340 | orchestrator | Friday 19 September 2025  07:04:02 +0000 (0:00:01.605)       0:00:10.166 ******
2025-09-19 07:10:25.019344 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
07:10:25.019348 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.019363 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-19 07:10:25.019367 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.019371 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-19 07:10:25.019375 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.019378 | orchestrator | 2025-09-19 07:10:25.019382 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-19 07:10:25.019386 | orchestrator | Friday 19 September 2025 07:04:03 +0000 (0:00:00.685) 0:00:10.852 ****** 2025-09-19 07:10:25.019396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 
2025-09-19 07:10:25.019409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2025-09-19 07:10:25.019429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.019441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.019445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.019449 | orchestrator | 2025-09-19 07:10:25.019453 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-19 07:10:25.019457 | orchestrator | Friday 19 September 2025 07:04:05 +0000 (0:00:02.560) 0:00:13.413 ****** 2025-09-19 07:10:25.019460 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.019464 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.019468 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.019474 | orchestrator | 2025-09-19 07:10:25.019480 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-19 07:10:25.019486 | orchestrator | Friday 19 September 2025 07:04:06 +0000 (0:00:01.009) 0:00:14.422 ****** 2025-09-19 07:10:25.019492 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-19 07:10:25.019503 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-19 07:10:25.019509 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-19 07:10:25.019515 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-19 07:10:25.019521 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-19 07:10:25.019527 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-19 07:10:25.019534 | orchestrator | 2025-09-19 07:10:25.019540 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-19 07:10:25.019547 | orchestrator | Friday 19 September 2025 07:04:08 +0000 (0:00:01.646) 0:00:16.069 ****** 2025-09-19 07:10:25.019552 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.019556 | orchestrator | 
changed: [testbed-node-0] 2025-09-19 07:10:25.019560 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.019564 | orchestrator | 2025-09-19 07:10:25.019570 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-19 07:10:25.019576 | orchestrator | Friday 19 September 2025 07:04:09 +0000 (0:00:01.157) 0:00:17.227 ****** 2025-09-19 07:10:25.019582 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:25.019588 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:25.019594 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:25.019599 | orchestrator | 2025-09-19 07:10:25.019605 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-19 07:10:25.019610 | orchestrator | Friday 19 September 2025 07:04:11 +0000 (0:00:01.554) 0:00:18.781 ****** 2025-09-19 07:10:25.019616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.019630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.019637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.019673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:10:25.019686 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.019693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.019699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.019706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.019712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad', 
'__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:10:25.019741 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.019767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.019774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.019786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.019792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:10:25.019798 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.019804 | orchestrator | 2025-09-19 07:10:25.019810 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-19 07:10:25.019816 | orchestrator | Friday 19 September 2025 07:04:12 +0000 (0:00:00.809) 0:00:19.590 ****** 2025-09-19 07:10:25.019822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019828 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019928 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.019936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:10:25.019944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.019956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:10:25.019986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.019997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.020014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad', '__omit_place_holder__f0683f7368fd4f18f9434c61d302dee1144580ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 07:10:25.020020 | orchestrator | 2025-09-19 07:10:25.020025 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-19 07:10:25.020031 | orchestrator | Friday 19 September 2025 07:04:16 +0000 (0:00:04.552) 0:00:24.143 ****** 2025-09-19 07:10:25.020037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.020114 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.020121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.020127 | orchestrator | 2025-09-19 07:10:25.020134 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-19 07:10:25.020140 | orchestrator | Friday 19 September 2025 07:04:20 +0000 (0:00:03.635) 0:00:27.779 ****** 2025-09-19 07:10:25.020147 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 07:10:25.020154 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 07:10:25.020160 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 07:10:25.020167 | orchestrator | 2025-09-19 07:10:25.020173 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-19 07:10:25.020179 | orchestrator | Friday 19 September 2025 07:04:23 
+0000 (0:00:03.144) 0:00:30.924 ****** 2025-09-19 07:10:25.020185 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 07:10:25.020192 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 07:10:25.020199 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 07:10:25.020209 | orchestrator | 2025-09-19 07:10:25.020386 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-19 07:10:25.020459 | orchestrator | Friday 19 September 2025 07:04:27 +0000 (0:00:04.167) 0:00:35.091 ****** 2025-09-19 07:10:25.020467 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.020474 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.020480 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.020486 | orchestrator | 2025-09-19 07:10:25.020493 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-19 07:10:25.020499 | orchestrator | Friday 19 September 2025 07:04:29 +0000 (0:00:01.715) 0:00:36.807 ****** 2025-09-19 07:10:25.020506 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 07:10:25.020513 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 07:10:25.020525 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 07:10:25.020531 | orchestrator | 2025-09-19 07:10:25.020536 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-19 07:10:25.020542 | orchestrator | Friday 19 September 2025 07:04:32 
+0000 (0:00:03.031) 0:00:39.838 ****** 2025-09-19 07:10:25.020548 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 07:10:25.020555 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 07:10:25.020561 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 07:10:25.020568 | orchestrator | 2025-09-19 07:10:25.020574 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-19 07:10:25.020580 | orchestrator | Friday 19 September 2025 07:04:34 +0000 (0:00:02.394) 0:00:42.232 ****** 2025-09-19 07:10:25.020587 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-19 07:10:25.020593 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-19 07:10:25.020600 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-19 07:10:25.020606 | orchestrator | 2025-09-19 07:10:25.020612 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-19 07:10:25.020618 | orchestrator | Friday 19 September 2025 07:04:36 +0000 (0:00:02.007) 0:00:44.240 ****** 2025-09-19 07:10:25.020624 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-19 07:10:25.020630 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-19 07:10:25.020636 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-19 07:10:25.020643 | orchestrator | 2025-09-19 07:10:25.020649 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-19 07:10:25.020656 | orchestrator | Friday 19 September 2025 07:04:38 +0000 (0:00:01.711) 0:00:45.951 ****** 2025-09-19 07:10:25.020663 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.020670 | orchestrator | 2025-09-19 07:10:25.020677 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-19 07:10:25.020684 | orchestrator | Friday 19 September 2025 07:04:39 +0000 (0:00:00.817) 0:00:46.768 ****** 2025-09-19 07:10:25.020692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.020763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.020776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.020783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.020790 | orchestrator | 2025-09-19 07:10:25.020797 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-19 07:10:25.020804 | orchestrator | Friday 19 September 2025 07:04:42 +0000 (0:00:03.494) 0:00:50.263 ****** 2025-09-19 07:10:25.020817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.020828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.020837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.020844 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.020852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.020860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.020872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.020879 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.020887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.020899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.020909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.020916 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.020923 | orchestrator | 2025-09-19 07:10:25.020930 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-19 07:10:25.020937 | orchestrator | Friday 19 September 2025 07:04:44 +0000 (0:00:01.305) 0:00:51.568 ****** 2025-09-19 07:10:25.020944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.020951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.020963 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.020991 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.020998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021057 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021065 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.021074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021104 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.021113 | orchestrator | 2025-09-19 07:10:25.021158 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19 07:10:25.021167 | orchestrator | Friday 19 September 2025 07:04:45 +0000 (0:00:01.009) 0:00:52.578 ****** 2025-09-19 07:10:25.021175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021203 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.021214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021245 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.021253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021281 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.021288 | orchestrator | 2025-09-19 07:10:25.021296 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-19 07:10:25.021303 | orchestrator | Friday 19 September 2025 07:04:46 +0000 (0:00:01.019) 0:00:53.598 ****** 2025-09-19 07:10:25.021313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021338 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.021345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021367 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.021378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021404 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.021411 | orchestrator | 2025-09-19 07:10:25.021417 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-19 07:10:25.021424 | orchestrator | Friday 19 September 2025 07:04:46 +0000 (0:00:00.714) 0:00:54.312 ****** 2025-09-19 07:10:25.021699 | orchestrator | 2025-09-19 07:10:25 | INFO  | Task 7d5269ad-b38b-4d7d-92f2-d5f3af7fa669 is in state SUCCESS 2025-09-19 07:10:25.021727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021736 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021750 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.021757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021792 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.021806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021826 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.021832 | orchestrator | 2025-09-19 07:10:25.021838 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-19 07:10:25.021844 | orchestrator | Friday 19 September 2025 07:04:48 +0000 (0:00:01.543) 0:00:55.856 ****** 2025-09-19 07:10:25.021850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021878 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.021884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.021903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021916 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.021927 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.021934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.021940 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.021946 | orchestrator | 2025-09-19 07:10:25.021956 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-19 07:10:25.021963 | orchestrator | Friday 19 September 2025 07:04:50 +0000 (0:00:02.069) 0:00:57.925 ****** 2025-09-19 07:10:25.022067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.022083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.022090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.022096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.022109 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.022115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.022131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.022137 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.022146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.022153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.022163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.022169 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.022175 | orchestrator | 2025-09-19 07:10:25.022181 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-19 07:10:25.022186 | orchestrator | Friday 19 September 2025 07:04:50 +0000 (0:00:00.513) 0:00:58.439 ****** 2025-09-19 07:10:25.022192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.022199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.022210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.022216 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.022225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.022232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.022242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.022248 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.022255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 07:10:25.022261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 07:10:25.022267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 07:10:25.022281 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.022286 | orchestrator | 2025-09-19 07:10:25.022292 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-19 07:10:25.022298 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.917) 0:00:59.356 ****** 2025-09-19 07:10:25.022305 
| orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 07:10:25.022311 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 07:10:25.022318 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 07:10:25.022324 | orchestrator | 2025-09-19 07:10:25.022330 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-19 07:10:25.022337 | orchestrator | Friday 19 September 2025 07:04:53 +0000 (0:00:01.839) 0:01:01.195 ****** 2025-09-19 07:10:25.022343 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 07:10:25.022350 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 07:10:25.022357 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 07:10:25.022363 | orchestrator | 2025-09-19 07:10:25.022369 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-19 07:10:25.022380 | orchestrator | Friday 19 September 2025 07:04:55 +0000 (0:00:01.471) 0:01:02.667 ****** 2025-09-19 07:10:25.022387 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 07:10:25.022393 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 07:10:25.022400 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 07:10:25.022406 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 
07:10:25.022412 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.022418 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 07:10:25.022424 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.022430 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 07:10:25.022436 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.022442 | orchestrator | 2025-09-19 07:10:25.022448 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-19 07:10:25.022454 | orchestrator | Friday 19 September 2025 07:04:55 +0000 (0:00:00.758) 0:01:03.426 ****** 2025-09-19 07:10:25.022465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.022472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.022484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 07:10:25.022491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.022501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.022508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 07:10:25.022519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.022526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.022537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 07:10:25.022544 | orchestrator | 2025-09-19 07:10:25.022550 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-19 07:10:25.022556 | orchestrator | Friday 19 September 2025 07:04:58 +0000 (0:00:02.557) 0:01:05.983 ****** 2025-09-19 07:10:25.022562 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.022568 | orchestrator | 2025-09-19 07:10:25.022574 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-19 07:10:25.022581 | orchestrator | Friday 19 September 2025 07:04:59 +0000 (0:00:00.834) 0:01:06.817 ****** 2025-09-19 07:10:25.022588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 07:10:25.022598 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.022621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.022634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.023950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 07:10:25.024023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.024032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024039 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 07:10:25.024057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.024075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024088 | orchestrator | 2025-09-19 07:10:25.024094 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-19 07:10:25.024100 | orchestrator | Friday 19 September 2025 07:05:06 +0000 (0:00:07.536) 0:01:14.354 ****** 2025-09-19 07:10:25.024107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 07:10:25.024113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.024122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024138 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.024150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 07:10:25.024156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.024162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 
'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 07:10:25.024177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024183 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 07:10:25.024189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.024202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024215 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.024220 | orchestrator | 2025-09-19 07:10:25.024226 | 
orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-19 07:10:25.024232 | orchestrator | Friday 19 September 2025 07:05:08 +0000 (0:00:01.360) 0:01:15.714 ****** 2025-09-19 07:10:25.024238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:10:25.024245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:10:25.024253 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.024259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:10:25.024264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:10:25.024270 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.024276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:10:25.024282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 07:10:25.024288 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.024293 | orchestrator | 2025-09-19 07:10:25.024299 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-19 
07:10:25.024305 | orchestrator | Friday 19 September 2025 07:05:09 +0000 (0:00:01.004) 0:01:16.719 ****** 2025-09-19 07:10:25.024311 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.024317 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.024322 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.024328 | orchestrator | 2025-09-19 07:10:25.024334 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-19 07:10:25.024340 | orchestrator | Friday 19 September 2025 07:05:11 +0000 (0:00:01.759) 0:01:18.478 ****** 2025-09-19 07:10:25.024349 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.024355 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.024361 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.024366 | orchestrator | 2025-09-19 07:10:25.024372 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-19 07:10:25.024380 | orchestrator | Friday 19 September 2025 07:05:13 +0000 (0:00:02.565) 0:01:21.044 ****** 2025-09-19 07:10:25.024386 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.024392 | orchestrator | 2025-09-19 07:10:25.024398 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-19 07:10:25.024404 | orchestrator | Friday 19 September 2025 07:05:14 +0000 (0:00:01.117) 0:01:22.162 ****** 2025-09-19 07:10:25.024415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.024422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.024442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.024471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024484 | orchestrator | 2025-09-19 07:10:25.024490 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-19 07:10:25.024496 | orchestrator | Friday 19 September 2025 07:05:21 +0000 (0:00:06.659) 0:01:28.821 ****** 2025-09-19 07:10:25.024502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.024515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.024528 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.024539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})
2025-09-19 07:10:25.024547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.024553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.024563 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.024575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.024583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.024593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.024599 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.024605 | orchestrator |
2025-09-19 07:10:25.024639 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-09-19 07:10:25.024645 | orchestrator | Friday 19 September 2025 07:05:21 +0000 (0:00:00.581)       0:01:29.403 ******
2025-09-19 07:10:25.024652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-19 07:10:25.024659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-19 07:10:25.024667 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.024674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-19 07:10:25.024680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-19 07:10:25.024687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-19 07:10:25.024693 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.024704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-19 07:10:25.024711 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.024717 | orchestrator |
2025-09-19 07:10:25.024722 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-09-19 07:10:25.024729 | orchestrator | Friday 19 September 2025 07:05:22 +0000 (0:00:00.897)       0:01:30.300 ******
2025-09-19 07:10:25.024735 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.024741 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.024830 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.024837 | orchestrator |
2025-09-19 07:10:25.024843 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-09-19 07:10:25.024849 | orchestrator | Friday 19 September 2025 07:05:24 +0000 (0:00:01.265)       0:01:31.566 ******
2025-09-19 07:10:25.024855 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.024860 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.024866 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.024872 | orchestrator |
2025-09-19 07:10:25.024878 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-09-19 07:10:25.024883 | orchestrator | Friday 19 September 2025 07:05:26 +0000 (0:00:02.073)       0:01:33.639 ******
2025-09-19 07:10:25.024890 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.024896 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.024902 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.024908 | orchestrator |
2025-09-19 07:10:25.024915 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-09-19 07:10:25.024921 | orchestrator | Friday 19 September 2025 07:05:26 +0000 (0:00:00.315)       0:01:33.954 ******
2025-09-19 07:10:25.024927 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.024933 | orchestrator |
2025-09-19 07:10:25.024940 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-09-19 07:10:25.024952 | orchestrator | Friday 19 September 2025 07:05:27 +0000 (0:00:00.919)       0:01:34.874 ******
2025-09-19 07:10:25.024960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-19 07:10:25.025132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-19 07:10:25.025146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-19 07:10:25.025161 | orchestrator |
2025-09-19 07:10:25.025168 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-09-19 07:10:25.025174 | orchestrator | Friday 19 September 2025 07:05:30 +0000 (0:00:02.729)       0:01:37.603 ******
2025-09-19 07:10:25.025181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-19 07:10:25.025187 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.025199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-19 07:10:25.025205 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.025213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-19 07:10:25.025226 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.025233 | orchestrator |
2025-09-19 07:10:25.025239 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-09-19 07:10:25.025245 | orchestrator | Friday 19 September 2025 07:05:31 +0000 (0:00:01.617)       0:01:39.220 ******
2025-09-19 07:10:25.025258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:10:25.025267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:10:25.025275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:10:25.025283 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.025290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:10:25.025297 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.025304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:10:25.025311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-19 07:10:25.025318 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.025325 | orchestrator |
2025-09-19 07:10:25.025335 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-09-19 07:10:25.025342 | orchestrator | Friday 19 September 2025 07:05:33 +0000 (0:00:01.825)       0:01:41.046 ******
2025-09-19 07:10:25.025349 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.025356 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.025363 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.025370 | orchestrator |
2025-09-19 07:10:25.025377 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-09-19 07:10:25.025384 | orchestrator | Friday 19 September 2025 07:05:34 +0000 (0:00:01.018)       0:01:42.064 ******
2025-09-19 07:10:25.025391 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.025397 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.025403 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.025408 | orchestrator |
2025-09-19 07:10:25.025414 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-09-19 07:10:25.025421 | orchestrator | Friday 19 September 2025 07:05:35 +0000 (0:00:01.239)       0:01:43.303 ******
2025-09-19 07:10:25.025433 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.025439 | orchestrator |
2025-09-19 07:10:25.025445 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-09-19 07:10:25.025450 | orchestrator | Friday 19 September 2025 07:05:36 +0000 (0:00:00.756)       0:01:44.060 ******
2025-09-19 07:10:25.025462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.025469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.025614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.025643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025670 | orchestrator |
2025-09-19 07:10:25.025676 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-09-19 07:10:25.025682 | orchestrator | Friday 19 September 2025 07:05:41 +0000 (0:00:04.960)       0:01:49.020 ******
2025-09-19 07:10:25.025689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.025695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025723 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.025734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.025740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025765 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.025779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.025790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.025812 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.025819 | orchestrator |
2025-09-19 07:10:25.025826 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-19 07:10:25.025832 | orchestrator | Friday 19 September 2025 07:05:42 +0000 (0:00:01.411)       0:01:50.431 ******
2025-09-19 07:10:25.025839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:10:25.025848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:10:25.025856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:10:25.025869 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.025878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:10:25.025889 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.025895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 07:10:25.025903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 07:10:25.025909 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.025915 | orchestrator | 2025-09-19 07:10:25.025922 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-19 07:10:25.025928 | orchestrator | Friday 19 September 2025 07:05:43 +0000 (0:00:01.006) 0:01:51.438 ****** 2025-09-19 07:10:25.025936 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.025943 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.025951 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.025958 | orchestrator | 2025-09-19 07:10:25.025966 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-19 07:10:25.026049 | orchestrator | Friday 19 September 2025 07:05:45 +0000 (0:00:01.477) 0:01:52.916 ****** 2025-09-19 07:10:25.026059 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.026066 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.026072 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.026078 | orchestrator | 2025-09-19 07:10:25.026084 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-19 07:10:25.026090 | orchestrator | Friday 19 September 2025 07:05:47 +0000 (0:00:02.156) 0:01:55.072 ****** 2025-09-19 07:10:25.026137 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.026144 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.026150 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.026156 | orchestrator | 2025-09-19 07:10:25.026162 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2025-09-19 07:10:25.026168 | orchestrator | Friday 19 September 2025 07:05:48 +0000 (0:00:00.489) 0:01:55.561 ****** 2025-09-19 07:10:25.026191 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.026197 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.026203 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.026209 | orchestrator | 2025-09-19 07:10:25.026216 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-19 07:10:25.026222 | orchestrator | Friday 19 September 2025 07:05:48 +0000 (0:00:00.297) 0:01:55.858 ****** 2025-09-19 07:10:25.026228 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.026234 | orchestrator | 2025-09-19 07:10:25.026240 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-19 07:10:25.026246 | orchestrator | Friday 19 September 2025 07:05:49 +0000 (0:00:00.777) 0:01:56.636 ****** 2025-09-19 07:10:25.026253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:10:25.026268 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:10:25.026308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:10:25.026315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.026733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:10:25.026820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.026836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.026873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.026885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.026910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2025-09-19 07:10:25.026922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.026996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:10:25.027083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:10:25.027096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 
07:10:25.027160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027194 | orchestrator | 2025-09-19 07:10:25.027209 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-19 07:10:25.027222 | orchestrator | Friday 19 September 2025 07:05:53 +0000 (0:00:03.918) 0:02:00.555 ****** 2025-09-19 07:10:25.027253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:10:25.027267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:10:25.027288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027364 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.027378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:10:25.027392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:10:25.027413 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:10:25.027501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}})  2025-09-19 07:10:25.027531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027572 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.027599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027621 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.027735 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.027756 | orchestrator | 2025-09-19 07:10:25.027792 | 
orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-09-19 07:10:25.027805 | orchestrator | Friday 19 September 2025 07:05:53 +0000 (0:00:00.825) 0:02:01.381 ******
2025-09-19 07:10:25.027817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-19 07:10:25.027830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-19 07:10:25.027851 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.027862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-19 07:10:25.027882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-19 07:10:25.027893 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.027904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-19 07:10:25.027915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-19 07:10:25.027926 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.027937 | orchestrator |
2025-09-19 07:10:25.027948 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-09-19 07:10:25.027959 | orchestrator | Friday 19 September 2025 07:05:54 +0000 (0:00:01.054) 0:02:02.435 ******
2025-09-19 07:10:25.028025 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.028037 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.028048 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.028059 | orchestrator |
2025-09-19 07:10:25.028070 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-09-19 07:10:25.028081 | orchestrator | Friday 19 September 2025 07:05:56 +0000 (0:00:01.734) 0:02:04.170 ******
2025-09-19 07:10:25.028092 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.028102 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.028113 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.028124 | orchestrator |
2025-09-19 07:10:25.028135 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-09-19 07:10:25.028145 | orchestrator | Friday 19 September 2025 07:05:58 +0000 (0:00:00.529) 0:02:05.977 ******
2025-09-19 07:10:25.028156 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.028167 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.028178 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.028188 | orchestrator |
2025-09-19 07:10:25.028199 | orchestrator | TASK [include_role : glance] ***************************************************
2025-09-19 07:10:25.028210 | orchestrator | Friday 19 September 2025 07:05:59 +0000 (0:00:00.529) 0:02:06.506 ******
2025-09-19 07:10:25.028221 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.028232 | orchestrator |
2025-09-19 07:10:25.028243 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-09-19 07:10:25.028254 | orchestrator | Friday 19 September 2025 07:05:59 +0000 (0:00:00.790)
0:02:07.296 ****** 2025-09-19 07:10:25.028273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:10:25.028311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:10:25.028331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:10:25.028360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:10:25.028373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:10:25.028416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:10:25.028437 | orchestrator | 2025-09-19 07:10:25.028448 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-19 07:10:25.028477 | orchestrator | Friday 19 September 2025 07:06:04 +0000 (0:00:04.314) 0:02:11.611 ****** 2025-09-19 07:10:25.028489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:10:25.028513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:10:25.028532 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.028545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:10:25.028563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:10:25.028581 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.028615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:10:25.028634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 07:10:25.028653 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.028664 | orchestrator | 2025-09-19 07:10:25.028675 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-19 07:10:25.028686 | orchestrator | Friday 19 September 2025 07:06:07 +0000 (0:00:03.198) 0:02:14.809 ****** 2025-09-19 07:10:25.028697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:10:25.028716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:10:25.028728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:10:25.028739 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.028751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:10:25.028762 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.028774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 07:10:25.028785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-19 07:10:25.028802 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.028813 | orchestrator |
2025-09-19 07:10:25.028824 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-09-19 07:10:25.028835 | orchestrator | Friday 19 September 2025 07:06:10 +0000 (0:00:03.086) 0:02:17.896 ******
2025-09-19 07:10:25.028846 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.028857 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.028873 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.028884 | orchestrator |
2025-09-19 07:10:25.028895 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-09-19 07:10:25.028906 | orchestrator | Friday 19 September 2025 07:06:11 +0000 (0:00:01.184) 0:02:19.080 ******
2025-09-19 07:10:25.028917 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.028927 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.028938 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.028949 | orchestrator |
2025-09-19 07:10:25.028960 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-09-19 07:10:25.028990 | orchestrator | Friday 19 September 2025 07:06:13 +0000 (0:00:01.959) 0:02:21.040 ******
2025-09-19 07:10:25.029002 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.029013 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.029023 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.029034 | orchestrator |
2025-09-19 07:10:25.029045 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-19 07:10:25.029056 | orchestrator | Friday 19 September 2025 07:06:14 +0000 (0:00:00.486) 0:02:21.527 ****** 2025-09-19 07:10:25.029067 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.029077 | orchestrator | 2025-09-19 07:10:25.029088 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-19 07:10:25.029099 | orchestrator | Friday 19 September 2025 07:06:14 +0000 (0:00:00.879) 0:02:22.406 ****** 2025-09-19 07:10:25.029119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:10:25.029132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:10:25.029144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:10:25.029162 | orchestrator | 2025-09-19 07:10:25.029173 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-19 07:10:25.029184 | orchestrator | Friday 19 September 2025 07:06:18 +0000 (0:00:03.448) 0:02:25.855 ****** 2025-09-19 07:10:25.029212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:10:25.029229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:10:25.029242 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.029253 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.029264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:10:25.029289 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.029315 | orchestrator | 2025-09-19 07:10:25.029333 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-19 07:10:25.029345 | orchestrator | Friday 19 September 2025 07:06:19 +0000 (0:00:00.752) 0:02:26.608 ****** 2025-09-19 07:10:25.029356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:10:25.029367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:10:25.029379 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.029390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:10:25.029401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:10:25.029422 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.029433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:10:25.029445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 07:10:25.029456 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.029467 | orchestrator | 2025-09-19 07:10:25.029478 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-19 07:10:25.029489 | orchestrator | Friday 19 September 2025 07:06:19 +0000 (0:00:00.682) 0:02:27.290 ****** 2025-09-19 07:10:25.029500 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.029511 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.029523 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.029534 | orchestrator | 2025-09-19 07:10:25.029545 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-19 07:10:25.029556 | 
orchestrator | Friday 19 September 2025 07:06:21 +0000 (0:00:01.218) 0:02:28.508 ****** 2025-09-19 07:10:25.029567 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.029578 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.029589 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.029600 | orchestrator | 2025-09-19 07:10:25.029610 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-19 07:10:25.029621 | orchestrator | Friday 19 September 2025 07:06:23 +0000 (0:00:02.090) 0:02:30.598 ****** 2025-09-19 07:10:25.029632 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.029643 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.029654 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.029664 | orchestrator | 2025-09-19 07:10:25.029675 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-19 07:10:25.029686 | orchestrator | Friday 19 September 2025 07:06:23 +0000 (0:00:00.567) 0:02:31.166 ****** 2025-09-19 07:10:25.029698 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.029709 | orchestrator | 2025-09-19 07:10:25.029720 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-19 07:10:25.029731 | orchestrator | Friday 19 September 2025 07:06:24 +0000 (0:00:00.955) 0:02:32.122 ****** 2025-09-19 07:10:25.029758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:10:25.029784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:10:25.029805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:10:25.029826 | orchestrator | 2025-09-19 07:10:25.029838 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-19 07:10:25.029849 | orchestrator | Friday 19 September 2025 07:06:28 +0000 (0:00:04.266) 0:02:36.389 ****** 2025-09-19 07:10:25.029866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:10:25.029879 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.029900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:10:25.029919 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.029952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:10:25.030121 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.030181 | orchestrator | 2025-09-19 07:10:25.030218 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-19 07:10:25.030240 | orchestrator | Friday 19 September 2025 07:06:30 +0000 (0:00:01.315) 0:02:37.705 ****** 2025-09-19 07:10:25.030283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:10:25.030309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:10:25.030331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:10:25.030355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:10:25.030375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:10:25.030395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:10:25.030416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:10:25.030436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 07:10:25.030458 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.030480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:10:25.030509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:10:25.030529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 07:10:25.030549 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.030570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:10:25.030602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 07:10:25.030634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 07:10:25.030651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 07:10:25.030663 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.030674 | orchestrator | 2025-09-19 07:10:25.030685 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-19 07:10:25.030696 | orchestrator | Friday 19 September 2025 07:06:31 +0000 (0:00:00.984) 0:02:38.689 ****** 2025-09-19 07:10:25.030707 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.030718 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.030728 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.030739 | orchestrator | 2025-09-19 07:10:25.030750 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-19 07:10:25.030761 | orchestrator | Friday 19 September 2025 07:06:32 +0000 (0:00:01.270) 0:02:39.960 ****** 2025-09-19 07:10:25.030792 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.030805 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.030816 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.030826 | orchestrator | 2025-09-19 07:10:25.030837 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-19 07:10:25.030848 | orchestrator | Friday 19 September 2025 07:06:34 +0000 (0:00:02.285) 0:02:42.245 ****** 2025-09-19 07:10:25.030860 
| orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.030871 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.030881 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.030892 | orchestrator | 2025-09-19 07:10:25.030902 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-19 07:10:25.030913 | orchestrator | Friday 19 September 2025 07:06:35 +0000 (0:00:00.337) 0:02:42.582 ****** 2025-09-19 07:10:25.030924 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.030935 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.030945 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.030956 | orchestrator | 2025-09-19 07:10:25.030966 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-19 07:10:25.030995 | orchestrator | Friday 19 September 2025 07:06:35 +0000 (0:00:00.573) 0:02:43.156 ****** 2025-09-19 07:10:25.031007 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.031017 | orchestrator | 2025-09-19 07:10:25.031028 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-19 07:10:25.031039 | orchestrator | Friday 19 September 2025 07:06:36 +0000 (0:00:01.106) 0:02:44.263 ****** 2025-09-19 07:10:25.031052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:10:25.031077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:10:25.031090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:10:25.031110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:10:25.031123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:10:25.031135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:10:25.031158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:10:25.031171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:10:25.031189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:10:25.031201 | orchestrator | 2025-09-19 07:10:25.031212 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-19 07:10:25.031223 | orchestrator | Friday 19 September 2025 07:06:40 +0000 (0:00:03.746) 0:02:48.010 ****** 2025-09-19 07:10:25.031235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 07:10:25.031247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:10:25.031269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:10:25.031280 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.031296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 07:10:25.031315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:10:25.031327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:10:25.031338 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.031350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 07:10:25.031368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:10:25.031384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:10:25.031395 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 07:10:25.031406 | orchestrator | 2025-09-19 07:10:25.031417 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-19 07:10:25.031428 | orchestrator | Friday 19 September 2025 07:06:41 +0000 (0:00:00.912) 0:02:48.922 ****** 2025-09-19 07:10:25.031439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 07:10:25.031452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 07:10:25.031463 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.031480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 07:10:25.031492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 07:10:25.031504 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.031515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 07:10:25.031526 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 07:10:25.031537 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.031547 | orchestrator | 2025-09-19 07:10:25.031558 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-19 07:10:25.031569 | orchestrator | Friday 19 September 2025 07:06:42 +0000 (0:00:00.882) 0:02:49.804 ****** 2025-09-19 07:10:25.031587 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.031598 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.031609 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.031619 | orchestrator | 2025-09-19 07:10:25.031630 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-19 07:10:25.031641 | orchestrator | Friday 19 September 2025 07:06:43 +0000 (0:00:01.218) 0:02:51.022 ****** 2025-09-19 07:10:25.031651 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.031662 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.031673 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.031684 | orchestrator | 2025-09-19 07:10:25.031695 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-19 07:10:25.031705 | orchestrator | Friday 19 September 2025 07:06:45 +0000 (0:00:02.337) 0:02:53.360 ****** 2025-09-19 07:10:25.031716 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.031727 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.031737 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.031748 | orchestrator | 2025-09-19 07:10:25.031759 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-19 
07:10:25.031770 | orchestrator | Friday 19 September 2025 07:06:46 +0000 (0:00:00.605) 0:02:53.965 ****** 2025-09-19 07:10:25.031780 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.031791 | orchestrator | 2025-09-19 07:10:25.031802 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-19 07:10:25.031813 | orchestrator | Friday 19 September 2025 07:06:47 +0000 (0:00:01.055) 0:02:55.021 ****** 2025-09-19 07:10:25.031828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:10:25.031841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.031860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:10:25.031884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.031897 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:10:25.031913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.031924 | orchestrator | 2025-09-19 07:10:25.031935 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-19 07:10:25.031946 | orchestrator | Friday 19 September 2025 07:06:51 +0000 (0:00:04.432) 0:02:59.453 ****** 2025-09-19 07:10:25.031957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:10:25.031998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032019 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.032030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:10:25.032042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032053 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.032068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:10:25.032080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032098 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.032109 | orchestrator | 2025-09-19 07:10:25.032126 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-19 07:10:25.032137 | orchestrator | Friday 19 September 2025 07:06:52 +0000 (0:00:00.870) 0:03:00.324 ****** 2025-09-19 07:10:25.032149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 07:10:25.032160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  
2025-09-19 07:10:25.032172 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.032183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 07:10:25.032194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 07:10:25.032205 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.032216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 07:10:25.032227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 07:10:25.032238 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.032249 | orchestrator | 2025-09-19 07:10:25.032260 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-19 07:10:25.032271 | orchestrator | Friday 19 September 2025 07:06:53 +0000 (0:00:00.977) 0:03:01.301 ****** 2025-09-19 07:10:25.032282 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.032293 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.032304 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.032314 | orchestrator | 2025-09-19 07:10:25.032325 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-19 07:10:25.032336 | orchestrator | Friday 19 September 2025 07:06:55 +0000 (0:00:01.255) 0:03:02.557 ****** 2025-09-19 07:10:25.032347 | orchestrator | changed: [testbed-node-0] 
2025-09-19 07:10:25.032358 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.032369 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.032379 | orchestrator | 2025-09-19 07:10:25.032390 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-19 07:10:25.032401 | orchestrator | Friday 19 September 2025 07:06:57 +0000 (0:00:02.226) 0:03:04.783 ****** 2025-09-19 07:10:25.032411 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.032422 | orchestrator | 2025-09-19 07:10:25.032433 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-19 07:10:25.032444 | orchestrator | Friday 19 September 2025 07:06:58 +0000 (0:00:01.474) 0:03:06.258 ****** 2025-09-19 07:10:25.032455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 07:10:25.032474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 07:10:25.032537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 
'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 
'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 07:10:25.032613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 
07:10:25.032625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032647 | orchestrator | 2025-09-19 07:10:25.032658 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-19 07:10:25.032670 | orchestrator | Friday 19 September 2025 07:07:02 +0000 (0:00:03.765) 0:03:10.023 ****** 2025-09-19 07:10:25.032689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 07:10:25.032709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032749 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.032761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 07:10:25.032773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032818 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.032836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 07:10:25.032848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.032872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.032890 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.032902 | orchestrator |
2025-09-19 07:10:25.032913 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-09-19 07:10:25.032925 | orchestrator | Friday 19 September 2025 07:07:03 +0000 (0:00:00.729)       0:03:10.753 ******
2025-09-19 07:10:25.032936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:10:25.032952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:10:25.032963 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.033025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:10:25.033037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:10:25.033048 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.033060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:10:25.033071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-19 07:10:25.033082 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.033093 | orchestrator |
2025-09-19 07:10:25.033105 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-09-19 07:10:25.033116 | orchestrator | Friday 19 September 2025 07:07:04 +0000 (0:00:01.348)       0:03:12.102 ******
2025-09-19 07:10:25.033134 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.033146 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.033157 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.033168 | orchestrator |
2025-09-19 07:10:25.033179 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-09-19 07:10:25.033190 | orchestrator | Friday 19 September 2025 07:07:06 +0000 (0:00:01.435)       0:03:13.537 ******
2025-09-19 07:10:25.033201 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.033212 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.033224 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.033235 | orchestrator |
2025-09-19 07:10:25.033246 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-09-19 07:10:25.033257 | orchestrator | Friday 19 September 2025 07:07:08 +0000 (0:00:02.273)       0:03:15.811 ******
2025-09-19 07:10:25.033268 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.033279 | orchestrator |
2025-09-19 07:10:25.033288 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-09-19 07:10:25.033298 | orchestrator | Friday 19 September 2025 07:07:09 +0000 (0:00:01.433)       0:03:17.244 ******
2025-09-19 07:10:25.033308 | orchestrator | ok: [testbed-node-0] =>
(item=testbed-node-0) 2025-09-19 07:10:25.033318 | orchestrator | 2025-09-19 07:10:25.033327 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-19 07:10:25.033337 | orchestrator | Friday 19 September 2025 07:07:12 +0000 (0:00:02.918) 0:03:20.162 ****** 2025-09-19 07:10:25.033348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:10:25.033371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:10:25.033382 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.033399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:10:25.033417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:10:25.033427 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.033442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:10:25.033460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:10:25.033471 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 07:10:25.033481 | orchestrator | 2025-09-19 07:10:25.033490 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-19 07:10:25.033501 | orchestrator | Friday 19 September 2025 07:07:14 +0000 (0:00:02.225) 0:03:22.388 ****** 2025-09-19 07:10:25.033511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:10:25.033527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:10:25.033538 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.033560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:10:25.033571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:10:25.033588 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.033602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:10:25.033614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 07:10:25.033623 | orchestrator | skipping: 
[testbed-node-1]
2025-09-19 07:10:25.033651 | orchestrator |
2025-09-19 07:10:25.033661 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-09-19 07:10:25.033671 | orchestrator | Friday 19 September 2025 07:07:17 +0000 (0:00:02.418)       0:03:24.806 ******
2025-09-19 07:10:25.033701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 07:10:25.033713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 07:10:25.033729 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.033739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 07:10:25.033749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 07:10:25.033759 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.033769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 07:10:25.033787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 07:10:25.033797 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.033806 | orchestrator | 2025-09-19 07:10:25.033816 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-19 07:10:25.033826 | orchestrator | Friday 19 September 2025 07:07:20 +0000 (0:00:02.976) 0:03:27.782 ****** 2025-09-19 07:10:25.033836 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.033845 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.033855 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.033865 | orchestrator | 2025-09-19 07:10:25.033874 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-19 07:10:25.033884 | orchestrator | Friday 19 September 2025 07:07:22 +0000 (0:00:01.954) 0:03:29.737 ****** 2025-09-19 07:10:25.033894 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.033903 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.033913 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.033922 | orchestrator | 2025-09-19 07:10:25.033933 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-19 07:10:25.033950 | orchestrator | Friday 19 September 2025 07:07:23 +0000 (0:00:01.589) 0:03:31.326 ****** 2025-09-19 07:10:25.033966 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.033993 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.034003 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.034012 | orchestrator | 2025-09-19 07:10:25.034067 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-19 07:10:25.034077 | orchestrator | Friday 19 
September 2025 07:07:24 +0000 (0:00:00.354) 0:03:31.681 ****** 2025-09-19 07:10:25.034087 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.034097 | orchestrator | 2025-09-19 07:10:25.034107 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-19 07:10:25.034116 | orchestrator | Friday 19 September 2025 07:07:25 +0000 (0:00:01.542) 0:03:33.223 ****** 2025-09-19 07:10:25.034139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-19 07:10:25.034151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2025-09-19 07:10:25.034161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-19 07:10:25.034172 | orchestrator | 2025-09-19 07:10:25.034181 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-19 07:10:25.034196 | orchestrator | Friday 19 September 2025 07:07:27 +0000 (0:00:01.608) 0:03:34.831 ****** 2025-09-19 07:10:25.034221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-19 07:10:25.034253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-19 07:10:25.034264 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.034274 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.034284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-19 07:10:25.034294 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.034304 | orchestrator | 2025-09-19 07:10:25.034313 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-19 07:10:25.034323 | orchestrator | Friday 19 September 2025 07:07:27 +0000 (0:00:00.408) 0:03:35.240 ****** 2025-09-19 07:10:25.034334 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-19 07:10:25.034356 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.034367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-19 07:10:25.034377 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.034388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-19 07:10:25.034398 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.034407 | orchestrator | 2025-09-19 07:10:25.034418 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-19 07:10:25.034427 | orchestrator | Friday 19 September 2025 07:07:28 +0000 (0:00:01.004) 0:03:36.244 ****** 2025-09-19 07:10:25.034437 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.034447 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.034456 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.034465 | orchestrator | 2025-09-19 07:10:25.034475 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-19 07:10:25.034485 | orchestrator | Friday 19 September 2025 07:07:29 +0000 (0:00:00.500) 0:03:36.744 ****** 2025-09-19 07:10:25.034502 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 07:10:25.034512 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.034526 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.034536 | orchestrator | 2025-09-19 07:10:25.034546 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-19 07:10:25.034555 | orchestrator | Friday 19 September 2025 07:07:30 +0000 (0:00:01.420) 0:03:38.165 ****** 2025-09-19 07:10:25.034566 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.034586 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.034596 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.034606 | orchestrator | 2025-09-19 07:10:25.034616 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-19 07:10:25.034625 | orchestrator | Friday 19 September 2025 07:07:31 +0000 (0:00:00.357) 0:03:38.522 ****** 2025-09-19 07:10:25.034635 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.034645 | orchestrator | 2025-09-19 07:10:25.034654 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-19 07:10:25.034664 | orchestrator | Friday 19 September 2025 07:07:32 +0000 (0:00:01.453) 0:03:39.976 ****** 2025-09-19 07:10:25.034681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:10:25.034692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 07:10:25.034750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:10:25.034777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:10:25.034788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:10:25.034820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:10:25.034868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 07:10:25.034884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 07:10:25.034909 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:10:25.034936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:10:25.034946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.034957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:10:25.034994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 07:10:25.035006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:10:25.035021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:10:25.035042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 07:10:25.035057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 07:10:25.035162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-19 07:10:25.035190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:10:25.035211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:10:25.035298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-19 07:10:25.035360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:10:25.035370 | orchestrator |
2025-09-19 07:10:25.035380 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-09-19 07:10:25.035390 | orchestrator | Friday 19 September 2025 07:07:36 +0000 (0:00:04.329) 0:03:44.305 ******
2025-09-19 07:10:25.035407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 07:10:25.035417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 07:10:25.035468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:10:25.035535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 07:10:25.035560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 07:10:25.035654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-19 07:10:25.035664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 07:10:25.035674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:10:25.035723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035733 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.035743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 07:10:25.035764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 07:10:25.035789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 07:10:25.035811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:10:25.035821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.035832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.035842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:10:25.035857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 07:10:25.035867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:10:25.035877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:10:25.035899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.035910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.035920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:10:25.035930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 07:10:25.035940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.035955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:10:25.036088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 07:10:25.036114 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.036125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 07:10:25.036135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.036146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 07:10:25.036160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:10:25.036171 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.036187 | orchestrator | 2025-09-19 07:10:25.036197 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-19 07:10:25.036208 | orchestrator | Friday 19 September 2025 07:07:38 +0000 (0:00:01.466) 0:03:45.771 ****** 2025-09-19 07:10:25.036218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:10:25.036229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:10:25.036240 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.036258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:10:25.036269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:10:25.036279 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.036289 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:10:25.036299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 07:10:25.036309 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.036319 | orchestrator | 2025-09-19 07:10:25.036328 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-19 07:10:25.036338 | orchestrator | Friday 19 September 2025 07:07:40 +0000 (0:00:02.068) 0:03:47.839 ****** 2025-09-19 07:10:25.036349 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.036358 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.036368 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.036378 | orchestrator | 2025-09-19 07:10:25.036387 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-19 07:10:25.036397 | orchestrator | Friday 19 September 2025 07:07:41 +0000 (0:00:01.237) 0:03:49.076 ****** 2025-09-19 07:10:25.036407 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.036417 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.036426 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.036436 | orchestrator | 2025-09-19 07:10:25.036446 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-19 07:10:25.036456 | orchestrator | Friday 19 September 2025 07:07:43 +0000 (0:00:01.974) 0:03:51.050 ****** 2025-09-19 07:10:25.036465 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.036475 | orchestrator | 2025-09-19 07:10:25.036485 | orchestrator | TASK [haproxy-config : Copying 
over placement haproxy config] ****************** 2025-09-19 07:10:25.036494 | orchestrator | Friday 19 September 2025 07:07:44 +0000 (0:00:01.239) 0:03:52.290 ****** 2025-09-19 07:10:25.036505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.036526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.036541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.036550 | orchestrator | 2025-09-19 07:10:25.036558 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-19 07:10:25.036566 | orchestrator | Friday 19 September 2025 07:07:48 +0000 (0:00:03.809) 0:03:56.100 ****** 2025-09-19 07:10:25.036575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.036584 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.036592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.036605 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.036617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.036625 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.036648 | orchestrator | 2025-09-19 07:10:25.036668 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-19 07:10:25.036676 | orchestrator | Friday 19 September 2025 07:07:49 +0000 (0:00:00.518) 0:03:56.618 ****** 2025-09-19 07:10:25.036684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:10:25.036693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:10:25.036702 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.036715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:10:25 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:10:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:10:25.036740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:10:25.036749 | orchestrator | skipping: [testbed-node-1] 
2025-09-19 07:10:25.036757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:10:25.036765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 07:10:25.036773 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.036781 | orchestrator | 2025-09-19 07:10:25.036789 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-19 07:10:25.036797 | orchestrator | Friday 19 September 2025 07:07:49 +0000 (0:00:00.782) 0:03:57.401 ****** 2025-09-19 07:10:25.036805 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.036818 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.036832 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.036846 | orchestrator | 2025-09-19 07:10:25.036860 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-19 07:10:25.036873 | orchestrator | Friday 19 September 2025 07:07:51 +0000 (0:00:02.043) 0:03:59.445 ****** 2025-09-19 07:10:25.036894 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.036908 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.036921 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.036935 | orchestrator | 2025-09-19 07:10:25.036949 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-19 07:10:25.036964 | orchestrator | Friday 19 September 2025 07:07:53 +0000 (0:00:01.905) 0:04:01.350 ****** 2025-09-19 07:10:25.036996 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.037009 | 
orchestrator | 2025-09-19 07:10:25.037022 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-19 07:10:25.037036 | orchestrator | Friday 19 September 2025 07:07:55 +0000 (0:00:01.661) 0:04:03.012 ****** 2025-09-19 07:10:25.037051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.037061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.037102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.037124 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037161 | orchestrator | 2025-09-19 07:10:25.037169 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] 
*** 2025-09-19 07:10:25.037177 | orchestrator | Friday 19 September 2025 07:07:59 +0000 (0:00:04.326) 0:04:07.338 ****** 2025-09-19 07:10:25.037186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.037199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
 2025-09-19 07:10:25.037208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037216 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.037230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 
07:10:25.037245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037262 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.037277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.037286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.037307 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.037320 | orchestrator | 2025-09-19 07:10:25.037328 | orchestrator 
| TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-19 07:10:25.037336 | orchestrator | Friday 19 September 2025 07:08:01 +0000 (0:00:01.381) 0:04:08.720 ****** 2025-09-19 07:10:25.037345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037392 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.037400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}})  2025-09-19 07:10:25.037425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037433 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.037441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 07:10:25.037478 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.037486 | orchestrator | 2025-09-19 07:10:25.037494 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-19 07:10:25.037502 | orchestrator | Friday 19 September 2025 07:08:02 +0000 (0:00:00.979) 0:04:09.699 ****** 2025-09-19 07:10:25.037522 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.037530 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.037538 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.037546 | orchestrator | 
2025-09-19 07:10:25.037554 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-19 07:10:25.037562 | orchestrator | Friday 19 September 2025 07:08:03 +0000 (0:00:01.399) 0:04:11.099 ****** 2025-09-19 07:10:25.037575 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.037583 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.037591 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.037599 | orchestrator | 2025-09-19 07:10:25.037607 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-19 07:10:25.037620 | orchestrator | Friday 19 September 2025 07:08:05 +0000 (0:00:02.181) 0:04:13.280 ****** 2025-09-19 07:10:25.037628 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.037636 | orchestrator | 2025-09-19 07:10:25.037644 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-19 07:10:25.037652 | orchestrator | Friday 19 September 2025 07:08:07 +0000 (0:00:01.600) 0:04:14.881 ****** 2025-09-19 07:10:25.037660 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-19 07:10:25.037668 | orchestrator | 2025-09-19 07:10:25.037676 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-19 07:10:25.037684 | orchestrator | Friday 19 September 2025 07:08:08 +0000 (0:00:00.888) 0:04:15.770 ****** 2025-09-19 07:10:25.037693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 07:10:25.037701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 07:10:25.037710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 07:10:25.037718 | orchestrator | 2025-09-19 07:10:25.037726 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-19 07:10:25.037735 | orchestrator | Friday 19 September 2025 07:08:12 +0000 (0:00:04.121) 0:04:19.892 ****** 2025-09-19 07:10:25.037743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:10:25.037756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:10:25.037769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:10:25.037777 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.037785 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.037793 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.037801 | orchestrator | 2025-09-19 07:10:25.037810 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-19 07:10:25.037822 | orchestrator | Friday 19 September 2025 07:08:13 +0000 (0:00:01.397) 0:04:21.290 ****** 2025-09-19 07:10:25.037831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:10:25.037840 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:10:25.037848 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.037857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:10:25.037865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:10:25.037873 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.037882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:10:25.037890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 07:10:25.037899 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.037907 | orchestrator | 2025-09-19 07:10:25.037915 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 07:10:25.037923 | orchestrator | Friday 19 September 2025 07:08:15 +0000 (0:00:01.614) 0:04:22.904 ****** 2025-09-19 07:10:25.037931 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.037940 | orchestrator | 
changed: [testbed-node-0] 2025-09-19 07:10:25.037948 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.037956 | orchestrator | 2025-09-19 07:10:25.037965 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 07:10:25.037994 | orchestrator | Friday 19 September 2025 07:08:18 +0000 (0:00:02.611) 0:04:25.516 ****** 2025-09-19 07:10:25.038003 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:10:25.038011 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:10:25.038049 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:10:25.038071 | orchestrator | 2025-09-19 07:10:25.038079 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-19 07:10:25.038087 | orchestrator | Friday 19 September 2025 07:08:21 +0000 (0:00:03.060) 0:04:28.576 ****** 2025-09-19 07:10:25.038102 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-19 07:10:25.038110 | orchestrator | 2025-09-19 07:10:25.038118 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-19 07:10:25.038125 | orchestrator | Friday 19 September 2025 07:08:22 +0000 (0:00:01.505) 0:04:30.081 ****** 2025-09-19 07:10:25.038138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:10:25.038157 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 07:10:25.038175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:10:25.038183 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.038209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:10:25.038218 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.038226 | orchestrator | 2025-09-19 07:10:25.038234 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-19 07:10:25.038242 | orchestrator | Friday 19 September 2025 07:08:24 +0000 (0:00:01.445) 0:04:31.526 ****** 2025-09-19 07:10:25.038250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:10:25.038258 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.038266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:10:25.038274 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.038283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 07:10:25.038297 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.038305 | orchestrator | 2025-09-19 07:10:25.038313 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-19 07:10:25.038321 | orchestrator | Friday 19 September 2025 07:08:25 +0000 (0:00:01.426) 0:04:32.953 ****** 2025-09-19 07:10:25.038328 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.038336 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.038344 | orchestrator 
| skipping: [testbed-node-2] 2025-09-19 07:10:25.038352 | orchestrator | 2025-09-19 07:10:25.038360 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 07:10:25.038368 | orchestrator | Friday 19 September 2025 07:08:27 +0000 (0:00:01.967) 0:04:34.920 ****** 2025-09-19 07:10:25.038376 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:25.038384 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:25.038392 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:25.038400 | orchestrator | 2025-09-19 07:10:25.038407 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 07:10:25.038419 | orchestrator | Friday 19 September 2025 07:08:29 +0000 (0:00:02.349) 0:04:37.270 ****** 2025-09-19 07:10:25.038427 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:10:25.038435 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:10:25.038443 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:10:25.038450 | orchestrator | 2025-09-19 07:10:25.038458 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-19 07:10:25.038466 | orchestrator | Friday 19 September 2025 07:08:32 +0000 (0:00:03.169) 0:04:40.439 ****** 2025-09-19 07:10:25.038474 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-19 07:10:25.038482 | orchestrator | 2025-09-19 07:10:25.038490 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-19 07:10:25.038498 | orchestrator | Friday 19 September 2025 07:08:33 +0000 (0:00:00.899) 0:04:41.338 ****** 2025-09-19 07:10:25.038506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:10:25.038514 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.038534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:10:25.038543 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.038551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:10:25.038571 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.038589 | orchestrator | 2025-09-19 07:10:25.038607 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-19 07:10:25.038619 | orchestrator | Friday 19 September 2025 07:08:35 +0000 (0:00:01.508) 0:04:42.846 ****** 
2025-09-19 07:10:25.038634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:10:25.038647 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.038660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:10:25.038672 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.038690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 07:10:25.038703 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.038736 | orchestrator | 2025-09-19 07:10:25.038750 | orchestrator 
| TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-09-19 07:10:25.038762 | orchestrator | Friday 19 September 2025 07:08:36 +0000 (0:00:01.477) 0:04:44.324 ******
2025-09-19 07:10:25.038775 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.038787 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.038800 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.038827 | orchestrator |
2025-09-19 07:10:25.038841 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-09-19 07:10:25.038852 | orchestrator | Friday 19 September 2025 07:08:38 +0000 (0:00:01.606) 0:04:45.931 ******
2025-09-19 07:10:25.038863 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.038876 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.038888 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.038901 | orchestrator |
2025-09-19 07:10:25.038913 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-09-19 07:10:25.038926 | orchestrator | Friday 19 September 2025 07:08:40 +0000 (0:00:02.450) 0:04:48.381 ******
2025-09-19 07:10:25.038939 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.038951 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.038959 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.039017 | orchestrator |
2025-09-19 07:10:25.039028 | orchestrator | TASK [include_role : octavia] **************************************************
2025-09-19 07:10:25.039036 | orchestrator | Friday 19 September 2025 07:08:44 +0000 (0:00:03.458) 0:04:51.839 ******
2025-09-19 07:10:25.039044 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.039060 | orchestrator |
2025-09-19 07:10:25.039076 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-09-19 07:10:25.039085 | orchestrator | Friday 19
September 2025 07:08:45 +0000 (0:00:01.603) 0:04:53.442 ****** 2025-09-19 07:10:25.039093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.039102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:10:25.039111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.039147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.039160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:10:25.039169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.039201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:10:25.039209 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:10:25.039226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.039251 | orchestrator | 2025-09-19 07:10:25.039259 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-19 07:10:25.039267 | orchestrator | Friday 19 September 2025 07:08:49 +0000 (0:00:03.633) 0:04:57.076 ****** 2025-09-19 07:10:25.039276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.039288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:10:25.039296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.039332 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.039340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.039349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:10:25.039361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.039392 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.039399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.039406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:10:25.039413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039423 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:10:25.039430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:10:25.039441 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.039448 | orchestrator | 2025-09-19 07:10:25.039455 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-19 07:10:25.039462 | orchestrator | Friday 19 September 2025 07:08:50 +0000 (0:00:00.722) 0:04:57.798 ****** 2025-09-19 07:10:25.039469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 07:10:25.039480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 07:10:25.039487 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.039494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 07:10:25.039500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 07:10:25.039507 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.039514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 07:10:25.039521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 07:10:25.039527 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.039534 | orchestrator |
2025-09-19 07:10:25.039541 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-09-19 07:10:25.039547 | orchestrator | Friday 19 September 2025 07:08:51 +0000 (0:00:01.566) 0:04:59.365 ******
2025-09-19 07:10:25.039554 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.039560 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.039567 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.039574 | orchestrator |
2025-09-19 07:10:25.039581 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
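The haproxy-config tasks in this run iterate over kolla-style service dicts whose `haproxy` sub-entries (for example the `octavia_api` and `octavia_api_external` items above) carry `mode`, `port`, `listen_port`, and optional `frontend_http_extra`/`backend_http_extra` lines. As an illustrative sketch only, not kolla-ansible's actual Jinja template, the mapping from one such entry plus the per-node backends to an haproxy `listen` stanza looks roughly like:

```python
def render_haproxy_service(name, svc, backends):
    """Render a minimal haproxy 'listen' stanza from a kolla-style service
    entry (illustrative sketch; the real role uses Jinja templates)."""
    lines = [f"listen {name}"]
    lines.append(f"    mode {svc['mode']}")
    # The frontend binds on listen_port; backends are reached on port.
    lines.append(f"    bind *:{svc.get('listen_port', svc['port'])}")
    for extra in svc.get("frontend_http_extra", []) + svc.get("backend_http_extra", []):
        lines.append(f"    {extra}")
    for host, addr in backends:
        lines.append(f"    server {host} {addr}:{svc['port']} check")
    return "\n".join(lines)


# Values taken from the octavia_api entry and node addresses in the log above.
svc = {"enabled": "yes", "mode": "http", "external": False,
       "port": "9876", "listen_port": "9876", "tls_backend": "no"}
backends = [("testbed-node-0", "192.168.16.10"),
            ("testbed-node-1", "192.168.16.11"),
            ("testbed-node-2", "192.168.16.12")]
print(render_haproxy_service("octavia_api", svc, backends))
```

The same pattern explains the skipped items: entries with `'enabled': False` (such as nova-serialproxy here) are filtered out before any stanza is rendered.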
2025-09-19 07:10:25.039588 | orchestrator | Friday 19 September 2025 07:08:53 +0000 (0:00:01.422) 0:05:00.787 ******
2025-09-19 07:10:25.039594 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.039601 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.039607 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.039614 | orchestrator |
2025-09-19 07:10:25.039621 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-09-19 07:10:25.039627 | orchestrator | Friday 19 September 2025 07:08:55 +0000 (0:00:02.134) 0:05:02.922 ******
2025-09-19 07:10:25.039634 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.039640 | orchestrator |
2025-09-19 07:10:25.039647 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-09-19 07:10:25.039654 | orchestrator | Friday 19 September 2025 07:08:56 +0000 (0:00:01.410) 0:05:04.333 ******
2025-09-19 07:10:25.039665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-19 07:10:25.039676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value':
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:10:25.039688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:10:25.039696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:10:25.039704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:10:25.039720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:10:25.039728 | orchestrator | 2025-09-19 07:10:25.039735 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-19 07:10:25.039742 | orchestrator | Friday 19 September 2025 07:09:02 +0000 (0:00:05.580) 0:05:09.913 ****** 2025-09-19 07:10:25.039753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:10:25.039761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:10:25.039768 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.039775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:10:25.039789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:10:25.039797 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.039809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:10:25.039816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:10:25.039823 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.039830 | orchestrator | 2025-09-19 07:10:25.039837 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-19 07:10:25.039847 | orchestrator | Friday 19 September 2025 07:09:03 +0000 (0:00:00.682) 0:05:10.596 ****** 2025-09-19 07:10:25.039854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 07:10:25.039861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:10:25.039868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:10:25.039875 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.039882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 07:10:25.039889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:10:25.039901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:10:25.039908 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.039918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 07:10:25.039930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:10:25.039942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 07:10:25.039953 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.039964 | orchestrator | 2025-09-19 07:10:25.039993 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-19 07:10:25.040004 | orchestrator | Friday 19 September 2025 07:09:04 +0000 (0:00:00.947) 0:05:11.543 ****** 2025-09-19 07:10:25.040014 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.040023 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.040033 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.040044 | orchestrator | 2025-09-19 07:10:25.040054 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-19 07:10:25.040071 | orchestrator | Friday 19 September 2025 07:09:04 +0000 (0:00:00.836) 0:05:12.379 ****** 2025-09-19 07:10:25.040081 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.040091 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.040100 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.040110 | orchestrator | 2025-09-19 07:10:25.040120 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-19 07:10:25.040130 | orchestrator | Friday 19 September 2025 07:09:06 +0000 (0:00:01.382) 0:05:13.762 ****** 2025-09-19 07:10:25.040140 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:10:25.040150 | orchestrator | 2025-09-19 07:10:25.040161 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-19 07:10:25.040171 | orchestrator | Friday 19 September 2025 07:09:07 +0000 (0:00:01.434) 0:05:15.196 ****** 2025-09-19 07:10:25.040193 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 07:10:25.040205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:10:25.040218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:10:25.040264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 07:10:25.040271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 07:10:25.040284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:10:25.040291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:10:25.040308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:10:25.040334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:10:25.040353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}}) 2025-09-19 07:10:25.040361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 07:10:25.040371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-09-19 07:10:25.040390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:10:25.040402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 07:10:25.040409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 07:10:25.040417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:10:25.040446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 07:10:25.040459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 07:10:25.040466 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:10:25.040479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:10:25.040486 | orchestrator | 2025-09-19 07:10:25.040493 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-19 07:10:25.040503 | orchestrator | Friday 19 September 2025 07:09:12 +0000 (0:00:04.668) 0:05:19.865 ****** 2025-09-19 07:10:25.040510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:10:25.040525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:10:25.040759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.040771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.040778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:10:25.040785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:10:25.040797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 07:10:25.040810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.040821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.040829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:10:25.040836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:10:25.040843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:10:25.040849 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.040856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.040866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.040874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:10:25.040893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:10:25.040904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 07:10:25.040912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:10:25.040919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.040926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:10:25.040937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.040944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.040954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:10:25.040962 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.041037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.041052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:10:25.041060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:10:25.041071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 07:10:25.041088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.041101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:10:25.041108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:10:25.041115 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.041122 | orchestrator |
2025-09-19 07:10:25.041128 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-09-19 07:10:25.041135 | orchestrator | Friday 19 September 2025 07:09:13 +0000 (0:00:01.280) 0:05:21.146 ******
2025-09-19 07:10:25.041142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 07:10:25.041150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 07:10:25.041158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:10:25.041165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:10:25.041172 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.041179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 07:10:25.041191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 07:10:25.041202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:10:25.041209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:10:25.041216 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.041223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 07:10:25.041230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 07:10:25.041237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:10:25.041248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 07:10:25.041255 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.041262 | orchestrator |
2025-09-19 07:10:25.041268 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-09-19 07:10:25.041275 | orchestrator | Friday 19 September 2025 07:09:14 +0000 (0:00:01.000) 0:05:22.146 ******
2025-09-19 07:10:25.041282 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.041288 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.041295 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.041302 | orchestrator |
2025-09-19 07:10:25.041308 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-09-19 07:10:25.041315 | orchestrator | Friday 19 September 2025 07:09:15 +0000 (0:00:00.480) 0:05:22.627 ******
2025-09-19 07:10:25.041321 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.041328 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.041335 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.041341 | orchestrator |
2025-09-19 07:10:25.041348 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-09-19 07:10:25.041355 | orchestrator | Friday 19 September 2025 07:09:16 +0000 (0:00:01.479) 0:05:24.106 ******
2025-09-19 07:10:25.041361 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.041368 | orchestrator |
2025-09-19 07:10:25.041375 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-09-19 07:10:25.041381 | orchestrator | Friday 19 September 2025 07:09:18 +0000 (0:00:01.740) 0:05:25.847 ******
2025-09-19 07:10:25.041390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:10:25.041408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:10:25.041417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:10:25.041426 | orchestrator |
2025-09-19 07:10:25.041437 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-09-19 07:10:25.041445 | orchestrator | Friday 19 September 2025 07:09:20 +0000 (0:00:02.548) 0:05:28.395 ******
2025-09-19 07:10:25.041453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:10:25.041463 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.041471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:10:25.041484 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.041495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 07:10:25.041504 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.041512 | orchestrator |
2025-09-19 07:10:25.041520 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-09-19 07:10:25.041528 | orchestrator | Friday 19 September 2025 07:09:21 +0000 (0:00:00.433) 0:05:28.829 ******
2025-09-19 07:10:25.041535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-19 07:10:25.041544 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.041551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-19 07:10:25.041559 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.041567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-19 07:10:25.041574 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.041582 | orchestrator |
2025-09-19 07:10:25.041590 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-09-19 07:10:25.041597 | orchestrator | Friday 19 September 2025 07:09:22 +0000 (0:00:01.146) 0:05:29.976 ******
2025-09-19 07:10:25.041609 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.041617 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.041625 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.041633 | orchestrator |
2025-09-19 07:10:25.041641 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-09-19 07:10:25.041649 | orchestrator | Friday 19 September 2025 07:09:22 +0000 (0:00:00.443) 0:05:30.419 ******
2025-09-19 07:10:25.041657 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.041665 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.041671 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.041678 | orchestrator |
2025-09-19 07:10:25.041685 | orchestrator | TASK [include_role : skyline] **************************************************
2025-09-19 07:10:25.041696 | orchestrator | Friday 19 September 2025 07:09:24 +0000 (0:00:01.400) 0:05:31.819 ******
2025-09-19 07:10:25.041702 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:10:25.041709 | orchestrator |
2025-09-19 07:10:25.041716 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-09-19 07:10:25.041722 | orchestrator | Friday 19 September 2025 07:09:26 +0000 (0:00:01.764) 0:05:33.584 ******
2025-09-19 07:10:25.041730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.041737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.041748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.041759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.041771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.041778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.041785 | orchestrator |
2025-09-19 07:10:25.041792 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-09-19 07:10:25.041799 | orchestrator | Friday 19 September 2025 07:09:32 +0000 (0:00:06.652) 0:05:40.236 ******
2025-09-19 07:10:25.041809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 07:10:25.041820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console',
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.041831 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.041838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.041845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.041852 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.041865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.041872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 07:10:25.041882 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:10:25.041889 | orchestrator | 2025-09-19 07:10:25.041896 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-19 07:10:25.041906 | orchestrator | Friday 19 September 2025 07:09:33 +0000 (0:00:00.676) 0:05:40.913 ****** 2025-09-19 07:10:25.041913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:10:25.041920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:10:25.041927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:10:25.041934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:10:25.041941 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:10:25.041947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:10:25.041954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:10:25.041961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:10:25.042054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 07:10:25.042064 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:10:25.042071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:10:25.042078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 07:10:25.042085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 07:10:25.042096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 07:10:25.042103 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042110 | orchestrator |
2025-09-19 07:10:25.042116 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-09-19 07:10:25.042123 | orchestrator | Friday 19 September 2025 07:09:35 +0000 (0:00:01.911) 0:05:42.825 ******
2025-09-19 07:10:25.042130 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.042136 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.042143 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.042155 | orchestrator |
2025-09-19 07:10:25.042161 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-09-19 07:10:25.042168 | orchestrator | Friday 19 September 2025 07:09:36 +0000 (0:00:01.394) 0:05:44.219 ******
2025-09-19 07:10:25.042175 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.042181 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.042188 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.042195 | orchestrator |
2025-09-19 07:10:25.042201 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-19 07:10:25.042208 | orchestrator | Friday 19 September 2025 07:09:39 +0000 (0:00:02.387) 0:05:46.607 ******
2025-09-19 07:10:25.042215 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042222 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042228 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042235 | orchestrator |
2025-09-19 07:10:25.042241 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-19 07:10:25.042248 | orchestrator | Friday 19 September 2025 07:09:39 +0000 (0:00:00.347) 0:05:46.955 ******
2025-09-19 07:10:25.042255 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042261 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042268 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042274 | orchestrator |
2025-09-19 07:10:25.042281 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-19 07:10:25.042292 | orchestrator | Friday 19 September 2025 07:09:39 +0000 (0:00:00.340) 0:05:47.295 ******
2025-09-19 07:10:25.042299 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042306 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042313 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042319 | orchestrator |
2025-09-19 07:10:25.042326 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-19 07:10:25.042333 | orchestrator | Friday 19 September 2025 07:09:40 +0000 (0:00:00.696) 0:05:47.992 ******
2025-09-19 07:10:25.042339 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042346 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042353 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042359 | orchestrator |
2025-09-19 07:10:25.042366 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-19 07:10:25.042373 | orchestrator | Friday 19 September 2025 07:09:40 +0000 (0:00:00.345) 0:05:48.338 ******
2025-09-19 07:10:25.042379 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042386 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042393 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042399 | orchestrator |
2025-09-19 07:10:25.042406 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-19 07:10:25.042413 | orchestrator | Friday 19 September 2025 07:09:41 +0000 (0:00:00.324) 0:05:48.663 ******
2025-09-19 07:10:25.042419 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042426 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042433 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042439 | orchestrator |
2025-09-19 07:10:25.042446 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-19 07:10:25.042453 | orchestrator | Friday 19 September 2025 07:09:42 +0000 (0:00:00.832) 0:05:49.495 ******
2025-09-19 07:10:25.042459 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.042466 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.042472 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.042479 | orchestrator |
2025-09-19 07:10:25.042486 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-19 07:10:25.042492 | orchestrator | Friday 19 September 2025 07:09:42 +0000 (0:00:00.722) 0:05:50.217 ******
2025-09-19 07:10:25.042499 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.042506 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.042512 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.042519 | orchestrator |
2025-09-19 07:10:25.042526 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-19 07:10:25.042537 | orchestrator | Friday 19 September 2025 07:09:43 +0000 (0:00:00.361) 0:05:50.579 ******
2025-09-19 07:10:25.042544 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.042550 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.042557 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.042563 | orchestrator |
2025-09-19 07:10:25.042570 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-19 07:10:25.042577 | orchestrator | Friday 19 September 2025 07:09:44 +0000 (0:00:00.937) 0:05:51.517 ******
2025-09-19 07:10:25.042583 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.042589 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.042595 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.042601 | orchestrator |
2025-09-19 07:10:25.042608 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-19 07:10:25.042614 | orchestrator | Friday 19 September 2025 07:09:45 +0000 (0:00:01.267) 0:05:52.785 ******
2025-09-19 07:10:25.042620 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.042626 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.042632 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.042638 | orchestrator |
2025-09-19 07:10:25.042644 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-19 07:10:25.042650 | orchestrator | Friday 19 September 2025 07:09:46 +0000 (0:00:00.909) 0:05:53.694 ******
2025-09-19 07:10:25.042657 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.042663 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.042669 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.042675 | orchestrator |
2025-09-19 07:10:25.042681 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-19 07:10:25.042691 | orchestrator | Friday 19 September 2025 07:09:55 +0000 (0:00:09.622) 0:06:03.317 ******
2025-09-19 07:10:25.042697 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.042703 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.042709 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.042715 | orchestrator |
2025-09-19 07:10:25.042722 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-19 07:10:25.042728 | orchestrator | Friday 19 September 2025 07:09:56 +0000 (0:00:00.786) 0:06:04.104 ******
2025-09-19 07:10:25.042734 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.042740 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.042746 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.042752 | orchestrator |
2025-09-19 07:10:25.042759 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-19 07:10:25.042765 | orchestrator | Friday 19 September 2025 07:10:05 +0000 (0:00:08.700) 0:06:12.805 ******
2025-09-19 07:10:25.042771 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.042777 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.042783 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.042790 | orchestrator |
2025-09-19 07:10:25.042796 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-19 07:10:25.042802 | orchestrator | Friday 19 September 2025 07:10:09 +0000 (0:00:04.360) 0:06:17.165 ******
2025-09-19 07:10:25.042808 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:10:25.042814 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:10:25.042820 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:10:25.042826 | orchestrator |
2025-09-19 07:10:25.042833 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-19 07:10:25.042839 | orchestrator | Friday 19 September 2025 07:10:18 +0000 (0:00:09.227) 0:06:26.392 ******
2025-09-19 07:10:25.042845 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042851 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042857 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042863 | orchestrator |
2025-09-19 07:10:25.042870 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-19 07:10:25.042876 | orchestrator | Friday 19 September 2025 07:10:19 +0000 (0:00:00.360) 0:06:26.753 ******
2025-09-19 07:10:25.042888 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042897 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042903 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042910 | orchestrator |
2025-09-19 07:10:25.042916 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-19 07:10:25.042922 | orchestrator | Friday 19 September 2025 07:10:19 +0000 (0:00:00.354) 0:06:27.107 ******
2025-09-19 07:10:25.042928 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042934 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042941 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042947 | orchestrator |
2025-09-19 07:10:25.042953 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-19 07:10:25.042959 | orchestrator | Friday 19 September 2025 07:10:20 +0000 (0:00:00.706) 0:06:27.813 ******
2025-09-19 07:10:25.042965 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.042985 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.042991 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.042997 | orchestrator |
2025-09-19 07:10:25.043003 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-19 07:10:25.043010 | orchestrator | Friday 19 September 2025 07:10:20 +0000 (0:00:00.386) 0:06:28.200 ******
2025-09-19 07:10:25.043016 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.043022 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.043028 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.043035 | orchestrator |
2025-09-19 07:10:25.043041 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-19 07:10:25.043047 | orchestrator | Friday 19 September 2025 07:10:21 +0000 (0:00:00.379) 0:06:28.579 ******
2025-09-19 07:10:25.043053 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:10:25.043060 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:10:25.043066 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:10:25.043072 | orchestrator |
2025-09-19 07:10:25.043078 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-19 07:10:25.043084 | orchestrator | Friday 19 September 2025 07:10:21 +0000 (0:00:00.363) 0:06:28.943 ******
2025-09-19 07:10:25.043091 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.043097 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.043103 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.043109 | orchestrator |
2025-09-19 07:10:25.043115 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-19 07:10:25.043121 | orchestrator | Friday 19 September 2025 07:10:22 +0000 (0:00:01.321) 0:06:30.264 ******
2025-09-19 07:10:25.043128 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:10:25.043134 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:10:25.043140 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:10:25.043146 | orchestrator |
2025-09-19 07:10:25.043152 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:10:25.043158 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 07:10:25.043165 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 07:10:25.043171 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 07:10:25.043177 | orchestrator |
2025-09-19 07:10:25.043183 | orchestrator |
2025-09-19 07:10:25.043190 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:10:25.043196 | orchestrator | Friday 19 September 2025 07:10:23 +0000 (0:00:00.854) 0:06:31.118 ******
2025-09-19 07:10:25.043202 | orchestrator | ===============================================================================
2025-09-19 07:10:25.043208 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.62s
2025-09-19 07:10:25.043224 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.23s
2025-09-19 07:10:25.043230 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.70s
2025-09-19 07:10:25.043237 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 7.54s
2025-09-19 07:10:25.043243 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.66s
2025-09-19 07:10:25.043249 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.65s
2025-09-19 07:10:25.043255 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.58s
2025-09-19 07:10:25.043261 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.96s
2025-09-19 07:10:25.043267 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.67s
2025-09-19 07:10:25.043273 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.55s
2025-09-19 07:10:25.043280 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.43s
2025-09-19 07:10:25.043286 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.36s
2025-09-19 07:10:25.043292 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.33s
2025-09-19 07:10:25.043298 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.33s
2025-09-19 07:10:25.043304 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.31s
2025-09-19 07:10:25.043310 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.27s
2025-09-19 07:10:25.043317 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.17s
2025-09-19 07:10:25.043323 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.12s
2025-09-19 07:10:25.043329 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.92s
2025-09-19 07:10:25.043335 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.81s
2025-09-19 07:10:28.062800 | orchestrator | 2025-09-19 07:10:28 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED
2025-09-19 07:10:28.064900 | orchestrator | 2025-09-19 07:10:28 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED
2025-09-19 07:10:28.065744 | orchestrator | 2025-09-19 07:10:28 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED
2025-09-19 07:10:28.066112 | orchestrator | 2025-09-19 07:10:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:10:31.097591 | orchestrator | 2025-09-19 07:10:31 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED
2025-09-19 07:10:31.098246 | orchestrator | 2025-09-19 07:10:31 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED
2025-09-19 07:10:31.100283 | orchestrator | 2025-09-19 07:10:31 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED
2025-09-19 07:10:31.100336 | orchestrator | 2025-09-19 07:10:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:10:34.144139 | orchestrator | 2025-09-19 07:10:34 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19
07:10:34.144489 | orchestrator | 2025-09-19 07:10:34 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED
2025-09-19 07:10:34.146525 | orchestrator | 2025-09-19 07:10:34 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED
2025-09-19 07:10:34.146559 | orchestrator | 2025-09-19 07:10:34 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:11:19.994328 | orchestrator | 2025-09-19 07:11:19 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED
2025-09-19 07:11:19.994424 | orchestrator | 2025-09-19 07:11:19 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED
2025-09-19 07:11:19.995000 | orchestrator | 2025-09-19 07:11:19 |
INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:19.995060 | orchestrator | 2025-09-19 07:11:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:23.046650 | orchestrator | 2025-09-19 07:11:23 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:23.048446 | orchestrator | 2025-09-19 07:11:23 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:23.050081 | orchestrator | 2025-09-19 07:11:23 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:23.050137 | orchestrator | 2025-09-19 07:11:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:26.103678 | orchestrator | 2025-09-19 07:11:26 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:26.105442 | orchestrator | 2025-09-19 07:11:26 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:26.107783 | orchestrator | 2025-09-19 07:11:26 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:26.107818 | orchestrator | 2025-09-19 07:11:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:29.155996 | orchestrator | 2025-09-19 07:11:29 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:29.157120 | orchestrator | 2025-09-19 07:11:29 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:29.158805 | orchestrator | 2025-09-19 07:11:29 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:29.159018 | orchestrator | 2025-09-19 07:11:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:32.203171 | orchestrator | 2025-09-19 07:11:32 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:32.205993 | orchestrator | 2025-09-19 07:11:32 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in 
state STARTED 2025-09-19 07:11:32.212076 | orchestrator | 2025-09-19 07:11:32 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:32.212154 | orchestrator | 2025-09-19 07:11:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:35.262624 | orchestrator | 2025-09-19 07:11:35 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:35.262726 | orchestrator | 2025-09-19 07:11:35 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:35.263661 | orchestrator | 2025-09-19 07:11:35 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:35.263688 | orchestrator | 2025-09-19 07:11:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:38.309717 | orchestrator | 2025-09-19 07:11:38 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:38.312415 | orchestrator | 2025-09-19 07:11:38 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:38.314421 | orchestrator | 2025-09-19 07:11:38 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:38.314455 | orchestrator | 2025-09-19 07:11:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:41.382668 | orchestrator | 2025-09-19 07:11:41 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:41.384804 | orchestrator | 2025-09-19 07:11:41 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:41.386946 | orchestrator | 2025-09-19 07:11:41 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:41.386985 | orchestrator | 2025-09-19 07:11:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:44.435561 | orchestrator | 2025-09-19 07:11:44 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:44.437058 | orchestrator 
| 2025-09-19 07:11:44 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:44.440133 | orchestrator | 2025-09-19 07:11:44 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:44.440168 | orchestrator | 2025-09-19 07:11:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:47.493171 | orchestrator | 2025-09-19 07:11:47 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:47.494330 | orchestrator | 2025-09-19 07:11:47 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:47.496045 | orchestrator | 2025-09-19 07:11:47 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:47.496495 | orchestrator | 2025-09-19 07:11:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:50.546764 | orchestrator | 2025-09-19 07:11:50 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:50.548798 | orchestrator | 2025-09-19 07:11:50 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:50.550586 | orchestrator | 2025-09-19 07:11:50 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:50.550923 | orchestrator | 2025-09-19 07:11:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:53.600174 | orchestrator | 2025-09-19 07:11:53 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:53.600777 | orchestrator | 2025-09-19 07:11:53 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:53.602649 | orchestrator | 2025-09-19 07:11:53 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:53.602679 | orchestrator | 2025-09-19 07:11:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:56.648936 | orchestrator | 2025-09-19 07:11:56 | INFO  | Task 
6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:56.651051 | orchestrator | 2025-09-19 07:11:56 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:56.653589 | orchestrator | 2025-09-19 07:11:56 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:56.653635 | orchestrator | 2025-09-19 07:11:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:11:59.707701 | orchestrator | 2025-09-19 07:11:59 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:11:59.709136 | orchestrator | 2025-09-19 07:11:59 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:11:59.711244 | orchestrator | 2025-09-19 07:11:59 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:11:59.711272 | orchestrator | 2025-09-19 07:11:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:02.751608 | orchestrator | 2025-09-19 07:12:02 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:02.752666 | orchestrator | 2025-09-19 07:12:02 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:02.756387 | orchestrator | 2025-09-19 07:12:02 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:02.757055 | orchestrator | 2025-09-19 07:12:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:05.799784 | orchestrator | 2025-09-19 07:12:05 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:05.802380 | orchestrator | 2025-09-19 07:12:05 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:05.803972 | orchestrator | 2025-09-19 07:12:05 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:05.804204 | orchestrator | 2025-09-19 07:12:05 | INFO  | Wait 1 second(s) until the next 
check 2025-09-19 07:12:08.847484 | orchestrator | 2025-09-19 07:12:08 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:08.849372 | orchestrator | 2025-09-19 07:12:08 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:08.852762 | orchestrator | 2025-09-19 07:12:08 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:08.852817 | orchestrator | 2025-09-19 07:12:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:11.898745 | orchestrator | 2025-09-19 07:12:11 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:11.899607 | orchestrator | 2025-09-19 07:12:11 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:11.901452 | orchestrator | 2025-09-19 07:12:11 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:11.901491 | orchestrator | 2025-09-19 07:12:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:14.952379 | orchestrator | 2025-09-19 07:12:14 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:14.954506 | orchestrator | 2025-09-19 07:12:14 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:14.956771 | orchestrator | 2025-09-19 07:12:14 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:14.956815 | orchestrator | 2025-09-19 07:12:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:18.001404 | orchestrator | 2025-09-19 07:12:18 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:18.001947 | orchestrator | 2025-09-19 07:12:18 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:18.003240 | orchestrator | 2025-09-19 07:12:18 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 
07:12:18.003280 | orchestrator | 2025-09-19 07:12:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:21.055249 | orchestrator | 2025-09-19 07:12:21 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:21.056938 | orchestrator | 2025-09-19 07:12:21 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:21.058830 | orchestrator | 2025-09-19 07:12:21 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:21.058896 | orchestrator | 2025-09-19 07:12:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:24.103604 | orchestrator | 2025-09-19 07:12:24 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:24.105503 | orchestrator | 2025-09-19 07:12:24 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:24.106668 | orchestrator | 2025-09-19 07:12:24 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:24.106698 | orchestrator | 2025-09-19 07:12:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:27.154770 | orchestrator | 2025-09-19 07:12:27 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:27.156242 | orchestrator | 2025-09-19 07:12:27 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:27.158175 | orchestrator | 2025-09-19 07:12:27 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:27.158204 | orchestrator | 2025-09-19 07:12:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:30.206731 | orchestrator | 2025-09-19 07:12:30 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:30.207624 | orchestrator | 2025-09-19 07:12:30 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:30.208952 | orchestrator | 2025-09-19 07:12:30 | 
INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:30.209234 | orchestrator | 2025-09-19 07:12:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:33.263044 | orchestrator | 2025-09-19 07:12:33 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:33.265263 | orchestrator | 2025-09-19 07:12:33 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:33.267722 | orchestrator | 2025-09-19 07:12:33 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:33.267984 | orchestrator | 2025-09-19 07:12:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:36.308910 | orchestrator | 2025-09-19 07:12:36 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:36.310331 | orchestrator | 2025-09-19 07:12:36 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:36.312609 | orchestrator | 2025-09-19 07:12:36 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:36.312740 | orchestrator | 2025-09-19 07:12:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:39.368522 | orchestrator | 2025-09-19 07:12:39 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:39.369494 | orchestrator | 2025-09-19 07:12:39 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state STARTED 2025-09-19 07:12:39.371640 | orchestrator | 2025-09-19 07:12:39 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:39.372474 | orchestrator | 2025-09-19 07:12:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:42.412587 | orchestrator | 2025-09-19 07:12:42 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED 2025-09-19 07:12:42.413679 | orchestrator | 2025-09-19 07:12:42 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in 
state STARTED 2025-09-19 07:12:42.419124 | orchestrator | 2025-09-19 07:12:42 | INFO  | Task 6b68b430-4137-4704-b662-64a232198e73 is in state SUCCESS 2025-09-19 07:12:42.421394 | orchestrator | 2025-09-19 07:12:42.421456 | orchestrator | 2025-09-19 07:12:42.421473 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-19 07:12:42.421493 | orchestrator | 2025-09-19 07:12:42.421568 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-19 07:12:42.421661 | orchestrator | Friday 19 September 2025 07:01:27 +0000 (0:00:00.857) 0:00:00.857 ****** 2025-09-19 07:12:42.421760 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.421783 | orchestrator | 2025-09-19 07:12:42.421803 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-19 07:12:42.421820 | orchestrator | Friday 19 September 2025 07:01:28 +0000 (0:00:01.193) 0:00:02.050 ****** 2025-09-19 07:12:42.421839 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.421948 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.421968 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.421986 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.422004 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.422152 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.422177 | orchestrator | 2025-09-19 07:12:42.422197 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-19 07:12:42.422218 | orchestrator | Friday 19 September 2025 07:01:30 +0000 (0:00:01.842) 0:00:03.892 ****** 2025-09-19 07:12:42.422236 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.422252 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.422263 | orchestrator | ok: 
[testbed-node-5] 2025-09-19 07:12:42.422273 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.422284 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.422294 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.422305 | orchestrator | 2025-09-19 07:12:42.422315 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-19 07:12:42.422326 | orchestrator | Friday 19 September 2025 07:01:31 +0000 (0:00:00.757) 0:00:04.650 ****** 2025-09-19 07:12:42.422414 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.422425 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.422436 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.422446 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.422457 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.422468 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.422478 | orchestrator | 2025-09-19 07:12:42.422489 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-19 07:12:42.422500 | orchestrator | Friday 19 September 2025 07:01:32 +0000 (0:00:00.912) 0:00:05.563 ****** 2025-09-19 07:12:42.422511 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.422521 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.422532 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.422542 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.422553 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.422563 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.422573 | orchestrator | 2025-09-19 07:12:42.422584 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-19 07:12:42.422595 | orchestrator | Friday 19 September 2025 07:01:33 +0000 (0:00:00.968) 0:00:06.531 ****** 2025-09-19 07:12:42.422605 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.422615 | orchestrator | ok: [testbed-node-4] 2025-09-19 
07:12:42.422626 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.422636 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.422689 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.422702 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.422713 | orchestrator | 2025-09-19 07:12:42.422724 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-19 07:12:42.422734 | orchestrator | Friday 19 September 2025 07:01:34 +0000 (0:00:00.907) 0:00:07.439 ****** 2025-09-19 07:12:42.422745 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.422756 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.422766 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.422777 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.422788 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.422798 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.422809 | orchestrator | 2025-09-19 07:12:42.422819 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-19 07:12:42.422831 | orchestrator | Friday 19 September 2025 07:01:35 +0000 (0:00:01.211) 0:00:08.650 ****** 2025-09-19 07:12:42.422906 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.422921 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.422932 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.422943 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.422953 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.422964 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.422975 | orchestrator | 2025-09-19 07:12:42.422986 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-19 07:12:42.423009 | orchestrator | Friday 19 September 2025 07:01:36 +0000 (0:00:01.170) 0:00:09.821 ****** 2025-09-19 07:12:42.423020 | orchestrator | ok: [testbed-node-3] 
2025-09-19 07:12:42.423031 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.423041 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.423052 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.423063 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.423073 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.423084 | orchestrator | 2025-09-19 07:12:42.423095 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-19 07:12:42.423106 | orchestrator | Friday 19 September 2025 07:01:37 +0000 (0:00:01.297) 0:00:11.118 ****** 2025-09-19 07:12:42.423159 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 07:12:42.423172 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 07:12:42.423183 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 07:12:42.423193 | orchestrator | 2025-09-19 07:12:42.423204 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-19 07:12:42.423215 | orchestrator | Friday 19 September 2025 07:01:38 +0000 (0:00:00.989) 0:00:12.108 ****** 2025-09-19 07:12:42.423261 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.423272 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.423283 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.423294 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.423304 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.423315 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.423325 | orchestrator | 2025-09-19 07:12:42.423351 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-19 07:12:42.423362 | orchestrator | Friday 19 September 2025 07:01:40 +0000 (0:00:01.367) 0:00:13.475 ****** 2025-09-19 07:12:42.423373 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 07:12:42.423383 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 07:12:42.423394 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 07:12:42.423405 | orchestrator | 2025-09-19 07:12:42.423415 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-19 07:12:42.423426 | orchestrator | Friday 19 September 2025 07:01:43 +0000 (0:00:03.203) 0:00:16.678 ****** 2025-09-19 07:12:42.423436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 07:12:42.423448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 07:12:42.423458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 07:12:42.423469 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.423480 | orchestrator | 2025-09-19 07:12:42.423490 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-19 07:12:42.423501 | orchestrator | Friday 19 September 2025 07:01:44 +0000 (0:00:00.644) 0:00:17.323 ****** 2025-09-19 07:12:42.423513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.423525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.423537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.423548 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.423592 | orchestrator | 2025-09-19 07:12:42.423603 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-19 07:12:42.423614 | orchestrator | Friday 19 September 2025 07:01:45 +0000 (0:00:00.886) 0:00:18.210 ****** 2025-09-19 07:12:42.423626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.423646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.423657 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.423700 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.423712 | orchestrator | 2025-09-19 07:12:42.423723 | orchestrator | TASK [ceph-facts : Set_fact 
running_mon - container] *************************** 2025-09-19 07:12:42.423733 | orchestrator | Friday 19 September 2025 07:01:45 +0000 (0:00:00.171) 0:00:18.382 ****** 2025-09-19 07:12:42.423756 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 07:01:41.400265', 'end': '2025-09-19 07:01:41.707398', 'delta': '0:00:00.307133', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.423866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 07:01:42.283463', 'end': '2025-09-19 07:01:42.587144', 'delta': '0:00:00.303681', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.423880 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 07:01:43.023411', 'end': '2025-09-19 
07:01:43.333004', 'delta': '0:00:00.309593', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.423899 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.423910 | orchestrator | 2025-09-19 07:12:42.423921 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-19 07:12:42.423932 | orchestrator | Friday 19 September 2025 07:01:45 +0000 (0:00:00.389) 0:00:18.771 ****** 2025-09-19 07:12:42.423943 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.423954 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.423964 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.423975 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.423985 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.423996 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.424007 | orchestrator | 2025-09-19 07:12:42.424018 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-19 07:12:42.424028 | orchestrator | Friday 19 September 2025 07:01:46 +0000 (0:00:01.411) 0:00:20.183 ****** 2025-09-19 07:12:42.424064 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:12:42.424075 | orchestrator | 2025-09-19 07:12:42.424086 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-19 07:12:42.424096 | orchestrator | Friday 19 September 2025 07:01:48 +0000 (0:00:01.494) 0:00:21.678 ****** 2025-09-19 07:12:42.424107 | orchestrator | skipping: [testbed-node-3] 
2025-09-19 07:12:42.424118 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.424129 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.424166 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.424178 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.424189 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.424199 | orchestrator |
2025-09-19 07:12:42.424211 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-19 07:12:42.424221 | orchestrator | Friday 19 September 2025 07:01:49 +0000 (0:00:01.477) 0:00:23.156 ******
2025-09-19 07:12:42.424232 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.424248 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.424259 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.424269 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.424280 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.424290 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.424301 | orchestrator |
2025-09-19 07:12:42.424312 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 07:12:42.424346 | orchestrator | Friday 19 September 2025 07:01:51 +0000 (0:00:01.383) 0:00:24.540 ******
2025-09-19 07:12:42.424358 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.424368 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.424400 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.424412 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.424423 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.424433 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.424444 | orchestrator |
2025-09-19 07:12:42.424455 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-19 07:12:42.424466 | orchestrator | Friday 19 September 2025 07:01:52 +0000 (0:00:00.977) 0:00:25.517 ******
2025-09-19 07:12:42.424476 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.424487 | orchestrator |
2025-09-19 07:12:42.424498 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-19 07:12:42.424551 | orchestrator | Friday 19 September 2025 07:01:52 +0000 (0:00:00.088) 0:00:25.605 ******
2025-09-19 07:12:42.424563 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.424573 | orchestrator |
2025-09-19 07:12:42.424584 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 07:12:42.424595 | orchestrator | Friday 19 September 2025 07:01:52 +0000 (0:00:00.157) 0:00:25.763 ******
2025-09-19 07:12:42.424606 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.424616 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.424637 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.424656 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.424680 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.424740 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.424761 | orchestrator |
2025-09-19 07:12:42.424789 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-19 07:12:42.424873 | orchestrator | Friday 19 September 2025 07:01:53 +0000 (0:00:00.822) 0:00:26.363 ******
2025-09-19 07:12:42.424895 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.424980 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.425000 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.425011 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.425087 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.425099 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.425110 | orchestrator |
2025-09-19 07:12:42.425121 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-19 07:12:42.425132 | orchestrator | Friday 19 September 2025 07:01:53 +0000 (0:00:00.822) 0:00:27.186 ******
2025-09-19 07:12:42.425143 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.425153 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.425164 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.425174 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.425185 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.425196 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.425206 | orchestrator |
2025-09-19 07:12:42.425217 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-19 07:12:42.425228 | orchestrator | Friday 19 September 2025 07:01:54 +0000 (0:00:00.505) 0:00:27.692 ******
2025-09-19 07:12:42.425239 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.425249 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.425260 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.425270 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.425281 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.425292 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.425302 | orchestrator |
2025-09-19 07:12:42.425313 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-19 07:12:42.425324 | orchestrator | Friday 19 September 2025 07:01:55 +0000 (0:00:01.306) 0:00:28.998 ******
2025-09-19 07:12:42.425335 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.425376 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.425389 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.425399 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.425410 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.425443 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.425463 | orchestrator |
2025-09-19 07:12:42.425482 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-19 07:12:42.425501 | orchestrator | Friday 19 September 2025 07:01:56 +0000 (0:00:01.007) 0:00:30.006 ******
2025-09-19 07:12:42.425521 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.425602 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.425617 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.425628 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.425639 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.425649 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.425660 | orchestrator |
2025-09-19 07:12:42.425671 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-19 07:12:42.425682 | orchestrator | Friday 19 September 2025 07:01:57 +0000 (0:00:00.855) 0:00:30.862 ******
2025-09-19 07:12:42.425693 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.425703 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.425714 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.425725 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.425746 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.425757 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.425767 | orchestrator |
2025-09-19 07:12:42.425834 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-19 07:12:42.425864 | orchestrator | Friday 19 September 2025 07:01:58 +0000 (0:00:00.727) 0:00:31.590 ******
2025-09-19 07:12:42.425884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--deb73447--54c2--58c6--89f8--2e63b50c59b2-osd--block--deb73447--54c2--58c6--89f8--2e63b50c59b2', 'dm-uuid-LVM-XvI1wpi0mlo2hzhwRoH4K1fschEbbhdh2e5elcYuufXf341NnOftrw9hvbPcwhQa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.425898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1-osd--block--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1', 'dm-uuid-LVM-Su90vW0BEUeQSGmwjTSwOn77M0vIvaha3sCWB7PEjm1YojP1KMlkNMNjvR6S7zpe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.425919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.425965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.425980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.425991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part1', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part14', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part15', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part16', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--deb73447--54c2--58c6--89f8--2e63b50c59b2-osd--block--deb73447--54c2--58c6--89f8--2e63b50c59b2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-x0Ho3y-Pqsq-ac3I-beEO-ZTA1-pzuy-YRapkj', 'scsi-0QEMU_QEMU_HARDDISK_4dd49722-42e6-4e94-9106-a95d5116fdb0', 'scsi-SQEMU_QEMU_HARDDISK_4dd49722-42e6-4e94-9106-a95d5116fdb0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1-osd--block--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NEFu06-MPXX-Rh7R-idEq-pJyD-oFMN-0BLXas', 'scsi-0QEMU_QEMU_HARDDISK_1cf24504-b3f3-4e87-bda4-4a150d83b5cd', 'scsi-SQEMU_QEMU_HARDDISK_1cf24504-b3f3-4e87-bda4-4a150d83b5cd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b11ce89-f193-4587-acb9-80845fc85b80', 'scsi-SQEMU_QEMU_HARDDISK_5b11ce89-f193-4587-acb9-80845fc85b80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426243 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.426264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--05a06e17--0162--5722--bf4c--f18a4cab61c7-osd--block--05a06e17--0162--5722--bf4c--f18a4cab61c7', 'dm-uuid-LVM-9uk4YiTZadA2OsxZkkgZB77Y39lpzkYip18PAjava5s6U1lHF4Tvey4NloiLtVL2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--caff573e--485a--5d29--90dc--90eefd21fd68-osd--block--caff573e--485a--5d29--90dc--90eefd21fd68', 'dm-uuid-LVM-RxxKgkgukx9yiVevNTLc9qm1B1abF2Vhik61pg9cUadwL3fll230ZQ0WvBDc0kJ0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d4db71fd--07e0--550b--b185--dcfd36a5307b-osd--block--d4db71fd--07e0--550b--b185--dcfd36a5307b', 'dm-uuid-LVM-2VuNSydCF6xDPFVEK9I5XoXP18hgexhbgpWMv9SItC0xRgBVntLtQHtX3v4TsWZz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0c5dfb3--0a46--5f65--b869--b08108365918-osd--block--a0c5dfb3--0a46--5f65--b869--b08108365918', 'dm-uuid-LVM-kfNH1NnBEHxPOM95MFR51kfI2q2qlQpVOr4q5w24oDuwPe24et2QFHL52n2BudGm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part16', 
'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--05a06e17--0162--5722--bf4c--f18a4cab61c7-osd--block--05a06e17--0162--5722--bf4c--f18a4cab61c7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-701MLa-Hvmr-zjJn-Mf5W-YNYL-f2gr-hokHBV', 'scsi-0QEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7', 'scsi-SQEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': 
['ceph--caff573e--485a--5d29--90dc--90eefd21fd68-osd--block--caff573e--485a--5d29--90dc--90eefd21fd68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4VIc2w-sS1q-hgag-jCCK-DQqD-F8UU-9JzOKT', 'scsi-0QEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee', 'scsi-SQEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0', 'scsi-SQEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426545 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426601 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.426612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part16'], 
'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d4db71fd--07e0--550b--b185--dcfd36a5307b-osd--block--d4db71fd--07e0--550b--b185--dcfd36a5307b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e9kad0-VhmD-NuZg-oCqb-z4kM-k78m-t9RP2d', 'scsi-0QEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400', 'scsi-SQEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a0c5dfb3--0a46--5f65--b869--b08108365918-osd--block--a0c5dfb3--0a46--5f65--b869--b08108365918'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-usR8JB-9mkA-PaYY-xtc5-7Wti-Y4N5-0AfeXV', 'scsi-0QEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c', 'scsi-SQEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3', 'scsi-SQEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426716 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.426727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part1', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part14', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part15', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part16', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426839 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.426904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426926 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426947 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.426963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.426985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.427006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.427024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.427036 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.427047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.427066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.427077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.427088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.427186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.427206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.427218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.427229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:12:42.427245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13', 'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.427264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:12:42.427282 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.427293 | orchestrator | 2025-09-19 07:12:42.427304 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-19 07:12:42.427315 | orchestrator | Friday 19 September 2025 07:02:00 +0000 (0:00:02.285) 0:00:33.876 ****** 2025-09-19 07:12:42.427326 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb73447--54c2--58c6--89f8--2e63b50c59b2-osd--block--deb73447--54c2--58c6--89f8--2e63b50c59b2', 'dm-uuid-LVM-XvI1wpi0mlo2hzhwRoH4K1fschEbbhdh2e5elcYuufXf341NnOftrw9hvbPcwhQa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.427346 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1-osd--block--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1', 'dm-uuid-LVM-Su90vW0BEUeQSGmwjTSwOn77M0vIvaha3sCWB7PEjm1YojP1KMlkNMNjvR6S7zpe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.427367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.427392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.427411 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.429643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:12:42.429673 | orchestrator | skipping: [testbed-node-3] => (items loop4-loop7, sda, sdb, sdc, sdd, sr0; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-09-19 07:12:42.429685 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-09-19 07:12:42.429835 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-09-19 07:12:42.430370 | orchestrator | skipping: [testbed-node-0] => (items loop0-loop6; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
[skipped device facts, identical in shape on each node: loop0-loop7 empty virtual devices; sda 80.00 GB QEMU HARDDISK root disk with partitions sda1 'cloudimg-rootfs' 79.00 GB, sda14 4.00 MB, sda15 'UEFI' 106.00 MB, sda16 'BOOT' 913.00 MB; sdb and sdc 20.00 GB QEMU HARDDISK LVM PVs backing the ceph osd-block devices dm-0/dm-1; sdd 20.00 GB QEMU HARDDISK without holders or partitions; sr0 QEMU DVD-ROM labeled 'config-2'; dm-0/dm-1 20.00 GB ceph osd-block LVs]
2025-09-19 07:12:42.430471 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True,
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430480 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.430495 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part1', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part14', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part15', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part16', 'scsi-SQEMU_QEMU_HARDDISK_808b2dc9-0ff9-481c-981a-fd6b77cc5192-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430514 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430524 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430533 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430542 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430551 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430568 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430578 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430591 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430599 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.430607 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430615 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.430627 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b26d151-337e-426e-879e-20214fca4ff4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
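All of the per-device skips above report one of two `false_condition` expressions: `inventory_hostname in groups.get(osd_group_name, [])` (the host is not in the OSD group) or `osd_auto_discovery | default(False) | bool` (auto-discovery of disks is disabled). As a minimal sketch, here is a hypothetical Python re-implementation of that gating logic; the function name, the `osds` group name default, and the example groups dict are assumptions for illustration, not ceph-ansible's actual code:

```python
# Hypothetical sketch of the two skip conditions seen in the log above.
# Not ceph-ansible's real implementation; names are illustrative.

def should_inspect_device(inventory_hostname, groups, osd_group_name="osds",
                          osd_auto_discovery=False):
    """A per-device task would run only if both conditions hold."""
    # Mirrors: 'inventory_hostname in groups.get(osd_group_name, [])'
    if inventory_hostname not in groups.get(osd_group_name, []):
        return False
    # Mirrors: 'osd_auto_discovery | default(False) | bool'
    return bool(osd_auto_discovery)

# Assumed group layout: nodes 3-5 carry the OSD role in this testbed.
groups = {"osds": ["testbed-node-3", "testbed-node-4", "testbed-node-5"]}
print(should_inspect_device("testbed-node-0", groups))  # False: not in OSD group
print(should_inspect_device("testbed-node-5", groups))  # False: auto-discovery off
```

This matches the pattern in the log: control-plane nodes (testbed-node-0..2) skip every device item on the group condition, while the storage nodes skip on the auto-discovery condition.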
2025-09-19 07:12:42.430640 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.430649 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430657 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.430669 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430678 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430699 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430715 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430726 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430735 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430748 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430756 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:12:42.430782 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13', 'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part16', 
'scsi-SQEMU_QEMU_HARDDISK_6d60e80d-2e2e-4d25-a1fd-9a57154def13-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:12:42.430796 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 07:12:42.430804 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.430812 | orchestrator |
2025-09-19 07:12:42.430821 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-19 07:12:42.430829 | orchestrator | Friday 19 September 2025 07:02:02 +0000 (0:00:01.873) 0:00:35.749 ******
2025-09-19 07:12:42.430841 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.430863 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.430871 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.430879 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.430886 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.430894 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.430902 | orchestrator |
2025-09-19 07:12:42.430910 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-19 07:12:42.430918 | orchestrator | Friday 19 September 2025 07:02:04 +0000 (0:00:01.656) 0:00:37.406 ******
2025-09-19 07:12:42.430925 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.430933 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.430941 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.430948 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.430956 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.430964 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.430971 | orchestrator |
2025-09-19 07:12:42.430979 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 07:12:42.430987 | orchestrator | Friday 19 September 2025 07:02:05 +0000 (0:00:00.999) 0:00:38.405 ******
2025-09-19 07:12:42.430995 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.431003 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.431015 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.431022 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.431030 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.431038 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.431046 | orchestrator |
2025-09-19 07:12:42.431054 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 07:12:42.431062 | orchestrator | Friday 19 September 2025 07:02:06 +0000 (0:00:01.482) 0:00:39.888 ******
2025-09-19 07:12:42.431069 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.431077 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.431085 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.431092 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.431100 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.431108 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.431115 | orchestrator |
2025-09-19 07:12:42.431123 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 07:12:42.431131 | orchestrator | Friday 19 September 2025 07:02:07 +0000 (0:00:00.745) 0:00:40.634 ******
2025-09-19 07:12:42.431139 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.431146 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.431154 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.431162 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.431169 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.431177 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.431185 | orchestrator |
2025-09-19 07:12:42.431193 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 07:12:42.431201 | orchestrator | Friday 19 September 2025 07:02:08 +0000 (0:00:01.196) 0:00:41.830 ******
2025-09-19 07:12:42.431208 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.431216 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.431224 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.431232 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.431239 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.431247 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.431255 | orchestrator |
2025-09-19 07:12:42.431263 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-19 07:12:42.431271 | orchestrator | Friday 19 September 2025 07:02:09 +0000 (0:00:00.958) 0:00:42.788 ******
2025-09-19 07:12:42.431278 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 07:12:42.431287 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 07:12:42.431295 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 07:12:42.431302 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 07:12:42.431310 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 07:12:42.431318 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 07:12:42.431326 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 07:12:42.431337 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:12:42.431345 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 07:12:42.431353 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-19 07:12:42.431361 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 07:12:42.431368 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 07:12:42.431376 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 07:12:42.431384 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 07:12:42.431392 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 07:12:42.431399 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 07:12:42.431407 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 07:12:42.431415 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 07:12:42.431423 | orchestrator |
2025-09-19 07:12:42.431431 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-19 07:12:42.431443 | orchestrator | Friday 19 September 2025 07:02:13 +0000 (0:00:03.811) 0:00:46.600 ******
2025-09-19 07:12:42.431451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 07:12:42.431459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 07:12:42.431466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 07:12:42.431474 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 07:12:42.431482 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 07:12:42.431490 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 07:12:42.431497 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.431505 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 07:12:42.431513 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 07:12:42.431521 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.431533 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 07:12:42.431541 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:12:42.431549 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.431557 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 07:12:42.431564 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 07:12:42.431572 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-19 07:12:42.431580 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.431587 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 07:12:42.431595 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 07:12:42.431603 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.431610 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 07:12:42.431618 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 07:12:42.431626 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 07:12:42.431633 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.431641 | orchestrator |
2025-09-19 07:12:42.431649 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-19 07:12:42.431657 | orchestrator | Friday 19 September 2025 07:02:14 +0000 (0:00:00.971) 0:00:47.571 ******
2025-09-19 07:12:42.431665 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.431672 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.431680 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.431688 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.431696 | orchestrator |
2025-09-19 07:12:42.431704 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-19 07:12:42.431712 | orchestrator | Friday 19 September 2025 07:02:15 +0000 (0:00:01.612) 0:00:49.184 ******
2025-09-19 07:12:42.431720 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.431727 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.431735 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.431743 | orchestrator |
2025-09-19 07:12:42.431750 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-19 07:12:42.431758 | orchestrator | Friday 19 September 2025 07:02:16 +0000 (0:00:00.437) 0:00:49.622 ******
2025-09-19 07:12:42.431766 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.431774 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.431781 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.431789 | orchestrator |
2025-09-19 07:12:42.431797 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-19 07:12:42.431805 | orchestrator | Friday 19 September 2025 07:02:16 +0000 (0:00:00.409) 0:00:50.031 ******
2025-09-19 07:12:42.431817 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.431825 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.431832 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.431840 | orchestrator |
2025-09-19 07:12:42.431861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-19 07:12:42.431869 | orchestrator | Friday 19 September 2025 07:02:17 +0000 (0:00:00.459) 0:00:50.490 ******
2025-09-19 07:12:42.431877 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.431885 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.431892 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.431900 | orchestrator |
2025-09-19 07:12:42.431908 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-19 07:12:42.431916 | orchestrator | Friday 19 September 2025 07:02:18 +0000 (0:00:00.965) 0:00:51.456 ******
2025-09-19 07:12:42.431924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.431932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.431940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.431947 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.431955 | orchestrator |
2025-09-19 07:12:42.431966 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-19 07:12:42.431974 | orchestrator | Friday 19 September 2025 07:02:18 +0000 (0:00:00.731) 0:00:52.188 ******
2025-09-19 07:12:42.431982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.431990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.431998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.432006 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.432013 | orchestrator |
2025-09-19 07:12:42.432021 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-19 07:12:42.432029 | orchestrator | Friday 19 September 2025 07:02:19 +0000 (0:00:00.524) 0:00:52.713 ******
2025-09-19 07:12:42.432037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.432045 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.432053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.432061 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.432068 | orchestrator |
2025-09-19 07:12:42.432076 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-19 07:12:42.432084 | orchestrator | Friday 19 September 2025 07:02:19 +0000 (0:00:00.341) 0:00:53.055 ******
2025-09-19 07:12:42.432092 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.432099 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.432107 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.432115 | orchestrator |
2025-09-19 07:12:42.432123 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-19 07:12:42.432131 | orchestrator | Friday 19 September 2025 07:02:20 +0000 (0:00:00.818) 0:00:53.394 ******
2025-09-19 07:12:42.432138 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-19 07:12:42.432146 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-19 07:12:42.432154 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-19 07:12:42.432162 | orchestrator |
2025-09-19 07:12:42.432173 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-19 07:12:42.432182 | orchestrator | Friday 19 September 2025 07:02:21 +0000 (0:00:00.818) 0:00:54.212 ******
2025-09-19 07:12:42.432189 |
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 07:12:42.432197 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 07:12:42.432205 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 07:12:42.432213 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 07:12:42.432221 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 07:12:42.432234 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 07:12:42.432241 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 07:12:42.432249 | orchestrator | 2025-09-19 07:12:42.432257 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-19 07:12:42.432265 | orchestrator | Friday 19 September 2025 07:02:21 +0000 (0:00:00.726) 0:00:54.939 ****** 2025-09-19 07:12:42.432272 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 07:12:42.432280 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 07:12:42.432288 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 07:12:42.432295 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 07:12:42.432304 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 07:12:42.432311 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 07:12:42.432319 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 07:12:42.432327 | orchestrator | 2025-09-19 07:12:42.432335 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 07:12:42.432342 | orchestrator | Friday 19 September 2025 07:02:23 +0000 (0:00:01.875) 0:00:56.814 ****** 2025-09-19 07:12:42.432350 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.432358 | orchestrator | 2025-09-19 07:12:42.432366 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 07:12:42.432374 | orchestrator | Friday 19 September 2025 07:02:24 +0000 (0:00:01.185) 0:00:58.000 ****** 2025-09-19 07:12:42.432382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.432390 | orchestrator | 2025-09-19 07:12:42.432397 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 07:12:42.432405 | orchestrator | Friday 19 September 2025 07:02:26 +0000 (0:00:01.554) 0:00:59.554 ****** 2025-09-19 07:12:42.432413 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.432421 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.432428 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.432436 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.432444 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.432451 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.432459 | orchestrator | 2025-09-19 07:12:42.432467 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 07:12:42.432475 | orchestrator | Friday 19 September 2025 07:02:28 +0000 (0:00:01.977) 0:01:01.532 ****** 2025-09-19 07:12:42.432486 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
07:12:42.432494 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.432501 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.432509 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.432517 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.432525 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.432532 | orchestrator | 2025-09-19 07:12:42.432540 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 07:12:42.432548 | orchestrator | Friday 19 September 2025 07:02:29 +0000 (0:00:01.381) 0:01:02.913 ****** 2025-09-19 07:12:42.432556 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.432563 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.432571 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.432579 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.432587 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.432598 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.432606 | orchestrator | 2025-09-19 07:12:42.432614 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 07:12:42.432622 | orchestrator | Friday 19 September 2025 07:02:31 +0000 (0:00:01.697) 0:01:04.611 ****** 2025-09-19 07:12:42.432629 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.432637 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.432645 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.432652 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.432660 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.432668 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.432676 | orchestrator | 2025-09-19 07:12:42.432683 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 07:12:42.432691 | orchestrator | Friday 19 September 2025 07:02:32 +0000 (0:00:00.927) 0:01:05.539 ****** 
2025-09-19 07:12:42.432699 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.432707 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.432714 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.432722 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.432730 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.432737 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.432745 | orchestrator |
2025-09-19 07:12:42.432753 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 07:12:42.432765 | orchestrator | Friday 19 September 2025 07:02:33 +0000 (0:00:01.569) 0:01:07.108 ******
2025-09-19 07:12:42.432772 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.432780 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.432788 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.432796 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.432804 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.432811 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.432819 | orchestrator |
2025-09-19 07:12:42.432827 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 07:12:42.432835 | orchestrator | Friday 19 September 2025 07:02:34 +0000 (0:00:00.655) 0:01:07.764 ******
2025-09-19 07:12:42.432876 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.432886 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.432893 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.432901 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.432909 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.432916 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.432924 | orchestrator |
2025-09-19 07:12:42.432932 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 07:12:42.432940 | orchestrator | Friday 19 September 2025 07:02:35 +0000 (0:00:00.877) 0:01:08.641 ******
2025-09-19 07:12:42.432947 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.432955 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.432963 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.432971 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.432978 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.432986 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.432994 | orchestrator |
2025-09-19 07:12:42.433001 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 07:12:42.433009 | orchestrator | Friday 19 September 2025 07:02:36 +0000 (0:00:01.143) 0:01:09.785 ******
2025-09-19 07:12:42.433017 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.433025 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.433032 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.433040 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.433048 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.433054 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.433060 | orchestrator |
2025-09-19 07:12:42.433067 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 07:12:42.433074 | orchestrator | Friday 19 September 2025 07:02:37 +0000 (0:00:00.996) 0:01:10.895 ******
2025-09-19 07:12:42.433085 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.433092 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.433098 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.433105 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.433111 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.433118 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.433125 | orchestrator |
2025-09-19 07:12:42.433131 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 07:12:42.433138 | orchestrator | Friday 19 September 2025 07:02:38 +0000 (0:00:00.996) 0:01:11.891 ******
2025-09-19 07:12:42.433144 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.433151 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.433158 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.433164 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.433171 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.433177 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.433184 | orchestrator |
2025-09-19 07:12:42.433190 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 07:12:42.433197 | orchestrator | Friday 19 September 2025 07:02:39 +0000 (0:00:00.817) 0:01:12.709 ******
2025-09-19 07:12:42.433204 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.433210 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.433217 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.433223 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.433230 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.433236 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.433243 | orchestrator |
2025-09-19 07:12:42.433250 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 07:12:42.433256 | orchestrator | Friday 19 September 2025 07:02:40 +0000 (0:00:01.101) 0:01:13.811 ******
2025-09-19 07:12:42.433266 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.433273 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.433280 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.433286 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.433293 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.433299 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.433306 | orchestrator |
2025-09-19 07:12:42.433312 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 07:12:42.433319 | orchestrator | Friday 19 September 2025 07:02:41 +0000 (0:00:00.677) 0:01:14.489 ******
2025-09-19 07:12:42.433326 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.433333 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.433339 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.433346 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.433352 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.433358 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.433365 | orchestrator |
2025-09-19 07:12:42.433372 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 07:12:42.433378 | orchestrator | Friday 19 September 2025 07:02:42 +0000 (0:00:01.121) 0:01:15.610 ******
2025-09-19 07:12:42.433385 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.433391 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.433398 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.433404 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.433411 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.433418 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.433424 | orchestrator |
2025-09-19 07:12:42.433431 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 07:12:42.433438 | orchestrator | Friday 19 September 2025 07:02:43 +0000 (0:00:00.883) 0:01:16.493 ******
2025-09-19 07:12:42.433444 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.433451 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.433457 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.433464 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.433474 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.433480 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.433487 | orchestrator |
2025-09-19 07:12:42.433498 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 07:12:42.433505 | orchestrator | Friday 19 September 2025 07:02:44 +0000 (0:00:00.945) 0:01:17.439 ******
2025-09-19 07:12:42.433511 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.433518 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.433524 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.433531 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.433537 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.433544 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.433551 | orchestrator |
2025-09-19 07:12:42.433557 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 07:12:42.433564 | orchestrator | Friday 19 September 2025 07:02:44 +0000 (0:00:00.637) 0:01:18.076 ******
2025-09-19 07:12:42.433570 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.433577 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.433584 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.433590 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.433596 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.433603 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.433609 | orchestrator |
2025-09-19 07:12:42.433616 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 07:12:42.433623 | orchestrator | Friday 19 September 2025 07:02:45 +0000 (0:00:00.869) 0:01:18.945 ******
2025-09-19 07:12:42.433629 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.433636 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.433642 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.433649 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.433655 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.433662 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.433668 | orchestrator |
2025-09-19 07:12:42.433675 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-09-19 07:12:42.433681 | orchestrator | Friday 19 September 2025 07:02:46 +0000 (0:00:01.230) 0:01:20.176 ******
2025-09-19 07:12:42.433688 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.433695 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.433701 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.433708 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.433714 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.433721 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.433727 | orchestrator |
2025-09-19 07:12:42.433734 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-09-19 07:12:42.433740 | orchestrator | Friday 19 September 2025 07:02:48 +0000 (0:00:01.473) 0:01:21.649 ******
2025-09-19 07:12:42.433747 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.433753 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.433760 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.433766 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.433773 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.433779 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.433786 | orchestrator |
2025-09-19 07:12:42.433793 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-09-19 07:12:42.433799 | orchestrator | Friday 19 September 2025 07:02:50 +0000 (0:00:02.199) 0:01:23.849 ******
2025-09-19 07:12:42.433806 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:12:42.433813 | orchestrator |
2025-09-19 07:12:42.433819 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-09-19 07:12:42.433826 | orchestrator | Friday 19 September 2025 07:02:51 +0000 (0:00:01.256) 0:01:25.106 ******
2025-09-19 07:12:42.433832 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.433884 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.433891 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.433898 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.433904 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.433911 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.433918 | orchestrator |
2025-09-19 07:12:42.433924 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-09-19 07:12:42.433934 | orchestrator | Friday 19 September 2025 07:02:52 +0000 (0:00:00.635) 0:01:25.741 ******
2025-09-19 07:12:42.433941 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.433947 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.433954 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.433961 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.433967 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.433974 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.433980 | orchestrator |
2025-09-19 07:12:42.433987 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-09-19 07:12:42.433993 | orchestrator | Friday 19 September 2025 07:02:53 +0000 (0:00:00.831) 0:01:26.573 ******
2025-09-19 07:12:42.434000 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 07:12:42.434007 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 07:12:42.434013 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 07:12:42.434044 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 07:12:42.434051 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 07:12:42.434057 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 07:12:42.434064 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 07:12:42.434071 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 07:12:42.434077 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 07:12:42.434084 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 07:12:42.434091 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 07:12:42.434101 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 07:12:42.434108 | orchestrator |
2025-09-19 07:12:42.434115 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-09-19 07:12:42.434121 | orchestrator | Friday 19 September 2025 07:02:54 +0000 (0:00:01.282) 0:01:27.856 ******
2025-09-19 07:12:42.434128 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.434134 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.434141 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.434147 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.434154 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.434160 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.434167 | orchestrator |
2025-09-19 07:12:42.434173 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-09-19 07:12:42.434180 | orchestrator | Friday 19 September 2025 07:02:55 +0000 (0:00:01.279) 0:01:29.135 ******
2025-09-19 07:12:42.434187 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.434193 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.434200 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.434206 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.434212 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.434219 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.434225 | orchestrator |
2025-09-19 07:12:42.434232 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-09-19 07:12:42.434239 | orchestrator | Friday 19 September 2025 07:02:56 +0000 (0:00:00.907) 0:01:29.764 ******
2025-09-19 07:12:42.434249 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.434256 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.434262 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.434269 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.434275 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.434282 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.434288 | orchestrator |
2025-09-19 07:12:42.434295 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-09-19 07:12:42.434301 | orchestrator | Friday 19 September 2025 07:02:57 +0000 (0:00:00.907) 0:01:30.672 ******
2025-09-19 07:12:42.434308 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.434315 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.434321 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.434327 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.434334 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.434340 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.434347 | orchestrator |
2025-09-19 07:12:42.434354 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-09-19 07:12:42.434360 | orchestrator | Friday 19 September 2025 07:02:58 +0000 (0:00:00.615) 0:01:31.287 ******
2025-09-19 07:12:42.434367 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:12:42.434374 | orchestrator |
2025-09-19 07:12:42.434380 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-09-19 07:12:42.434387 | orchestrator | Friday 19 September 2025 07:02:59 +0000 (0:00:01.245) 0:01:32.533 ******
2025-09-19 07:12:42.434393 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.434400 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.434406 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.434413 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.434419 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.434426 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.434432 | orchestrator |
2025-09-19 07:12:42.434439 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-09-19 07:12:42.434446 | orchestrator | Friday 19 September 2025 07:03:51 +0000 (0:00:52.207) 0:02:24.740 ******
2025-09-19 07:12:42.434452 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 07:12:42.434459 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 07:12:42.434468 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 07:12:42.434475 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.434481 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 07:12:42.434488 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 07:12:42.434494 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 07:12:42.434501 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.434507 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 07:12:42.434514 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 07:12:42.434521 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 07:12:42.434527 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.434533 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 07:12:42.434540 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 07:12:42.434546 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 07:12:42.434553 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.434564 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 07:12:42.434571 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 07:12:42.434578 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 07:12:42.434584 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.434591 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 07:12:42.434605 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 07:12:42.434612 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 07:12:42.434619 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.434626 | orchestrator |
2025-09-19 07:12:42.434632 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-09-19 07:12:42.434639 | orchestrator | Friday 19 September 2025 07:03:52 +0000 (0:00:00.774) 0:02:25.515 ******
2025-09-19 07:12:42.434645 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.434652 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.434658 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.434665 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.434671 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.434678 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.434684 | orchestrator |
2025-09-19 07:12:42.434691 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-09-19 07:12:42.434708 | orchestrator | Friday 19 September 2025 07:03:53 +0000 (0:00:00.718) 0:02:26.233 ******
2025-09-19 07:12:42.434715 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.434721 | orchestrator |
2025-09-19 07:12:42.434728 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-09-19 07:12:42.434735 | orchestrator | Friday 19 September 2025 07:03:53 +0000 (0:00:00.113) 0:02:26.347 ******
2025-09-19 07:12:42.434741 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.434747 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.434754 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.434760 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.434767 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.434773 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.434780 | orchestrator |
2025-09-19 07:12:42.434786 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-09-19 07:12:42.434793 | orchestrator | Friday 19 September 2025 07:03:53 +0000 (0:00:00.553) 0:02:26.901 ******
2025-09-19 07:12:42.434799 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.434806 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.434813 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.434819 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.434825 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.434832 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.434838 | orchestrator |
2025-09-19 07:12:42.434854 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-09-19 07:12:42.434861 | orchestrator | Friday 19 September 2025 07:03:54 +0000 (0:00:00.669) 0:02:27.570 ******
2025-09-19 07:12:42.434868 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.434874 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.434881 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.434887 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.434894 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.434900 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.434907 | orchestrator |
2025-09-19 07:12:42.434913 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-09-19 07:12:42.434920 | orchestrator | Friday 19 September 2025 07:03:54 +0000 (0:00:00.576) 0:02:28.146 ******
2025-09-19 07:12:42.434926 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.434933 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.434948 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.434961 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.434972 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.434983 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.434993 | orchestrator |
2025-09-19 07:12:42.435004 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-09-19 07:12:42.435014 | orchestrator | Friday 19 September 2025 07:03:57 +0000 (0:00:02.312) 0:02:30.459 ******
2025-09-19 07:12:42.435024 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.435034 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.435045 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.435057 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.435069 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.435080 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.435091 | orchestrator |
2025-09-19 07:12:42.435104 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-09-19 07:12:42.435119 | orchestrator | Friday 19 September 2025 07:03:57 +0000 (0:00:00.596) 0:02:31.056 ******
2025-09-19 07:12:42.435132 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:12:42.435140 | orchestrator |
2025-09-19 07:12:42.435147 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-09-19 07:12:42.435153 | orchestrator | Friday 19 September 2025 07:03:58 +0000 (0:00:00.958) 0:02:32.014 ******
2025-09-19 07:12:42.435160 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.435166 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.435173 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.435179 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.435186 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.435192 | orchestrator | skipping:
[testbed-node-2] 2025-09-19 07:12:42.435199 | orchestrator | 2025-09-19 07:12:42.435205 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-19 07:12:42.435212 | orchestrator | Friday 19 September 2025 07:03:59 +0000 (0:00:00.760) 0:02:32.775 ****** 2025-09-19 07:12:42.435219 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.435225 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.435232 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.435238 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.435245 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.435251 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.435257 | orchestrator | 2025-09-19 07:12:42.435264 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-19 07:12:42.435270 | orchestrator | Friday 19 September 2025 07:04:00 +0000 (0:00:00.613) 0:02:33.389 ****** 2025-09-19 07:12:42.435277 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.435283 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.435290 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.435297 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.435303 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.435316 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.435323 | orchestrator | 2025-09-19 07:12:42.435330 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-19 07:12:42.435336 | orchestrator | Friday 19 September 2025 07:04:00 +0000 (0:00:00.650) 0:02:34.039 ****** 2025-09-19 07:12:42.435343 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.435349 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.435356 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.435362 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 07:12:42.435369 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.435375 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.435382 | orchestrator | 2025-09-19 07:12:42.435388 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-19 07:12:42.435395 | orchestrator | Friday 19 September 2025 07:04:01 +0000 (0:00:00.950) 0:02:34.990 ****** 2025-09-19 07:12:42.435407 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.435414 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.435420 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.435427 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.435433 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.435439 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.435446 | orchestrator | 2025-09-19 07:12:42.435452 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-19 07:12:42.435459 | orchestrator | Friday 19 September 2025 07:04:02 +0000 (0:00:00.898) 0:02:35.888 ****** 2025-09-19 07:12:42.435466 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.435472 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.435479 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.435485 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.435491 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.435498 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.435504 | orchestrator | 2025-09-19 07:12:42.435511 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-19 07:12:42.435518 | orchestrator | Friday 19 September 2025 07:04:03 +0000 (0:00:01.028) 0:02:36.917 ****** 2025-09-19 07:12:42.435524 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.435531 | orchestrator | skipping: 
[testbed-node-4] 2025-09-19 07:12:42.435537 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.435543 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.435550 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.435556 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.435562 | orchestrator | 2025-09-19 07:12:42.435569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-19 07:12:42.435576 | orchestrator | Friday 19 September 2025 07:04:04 +0000 (0:00:00.606) 0:02:37.524 ****** 2025-09-19 07:12:42.435582 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.435588 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.435595 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.435601 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.435608 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.435614 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.435620 | orchestrator | 2025-09-19 07:12:42.435627 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-19 07:12:42.435634 | orchestrator | Friday 19 September 2025 07:04:05 +0000 (0:00:00.754) 0:02:38.278 ****** 2025-09-19 07:12:42.435640 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.435647 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.435653 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.435660 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.435666 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.435673 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.435679 | orchestrator | 2025-09-19 07:12:42.435686 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-19 07:12:42.435693 | orchestrator | Friday 19 September 2025 07:04:06 +0000 (0:00:01.085) 0:02:39.377 ****** 2025-09-19 
07:12:42.435699 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.435706 | orchestrator | 2025-09-19 07:12:42.435715 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-19 07:12:42.435722 | orchestrator | Friday 19 September 2025 07:04:07 +0000 (0:00:01.032) 0:02:40.410 ****** 2025-09-19 07:12:42.435729 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-19 07:12:42.435735 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-19 07:12:42.435742 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-19 07:12:42.435749 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-19 07:12:42.435759 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-19 07:12:42.435766 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-19 07:12:42.435772 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-19 07:12:42.435779 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-19 07:12:42.435785 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-19 07:12:42.435792 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-19 07:12:42.435799 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-19 07:12:42.435805 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-19 07:12:42.435812 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-19 07:12:42.435818 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-19 07:12:42.435825 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-19 07:12:42.435831 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2025-09-19 07:12:42.435838 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-19 07:12:42.435884 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-19 07:12:42.435893 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-19 07:12:42.435899 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-19 07:12:42.435910 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-19 07:12:42.435917 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-19 07:12:42.435924 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-19 07:12:42.435931 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-19 07:12:42.435938 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-19 07:12:42.435944 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-19 07:12:42.435951 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-19 07:12:42.435958 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-19 07:12:42.435965 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-19 07:12:42.435971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-19 07:12:42.435978 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-19 07:12:42.435985 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-19 07:12:42.435991 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-19 07:12:42.436001 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-19 07:12:42.436012 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-19 07:12:42.436025 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-19 07:12:42.436036 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-19 07:12:42.436047 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-19 07:12:42.436058 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-19 07:12:42.436068 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-19 07:12:42.436079 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:12:42.436089 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:12:42.436100 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:12:42.436110 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-19 07:12:42.436120 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-19 07:12:42.436131 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:12:42.436142 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:12:42.436153 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:12:42.436173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:12:42.436184 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:12:42.436197 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-19 07:12:42.436208 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:12:42.436219 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:12:42.436229 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:12:42.436241 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:12:42.436252 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:12:42.436263 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 07:12:42.436275 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:12:42.436287 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:12:42.436302 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:12:42.436309 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:12:42.436315 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:12:42.436321 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 07:12:42.436327 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:12:42.436333 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:12:42.436339 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:12:42.436345 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:12:42.436351 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:12:42.436357 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 07:12:42.436363 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:12:42.436369 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:12:42.436375 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:12:42.436381 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:12:42.436388 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 
2025-09-19 07:12:42.436394 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 07:12:42.436400 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:12:42.436406 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:12:42.436412 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:12:42.436423 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:12:42.436430 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:12:42.436436 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 07:12:42.436442 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-19 07:12:42.436448 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:12:42.436454 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-19 07:12:42.436460 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-19 07:12:42.436466 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:12:42.436473 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 07:12:42.436484 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-19 07:12:42.436490 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-19 07:12:42.436497 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-19 07:12:42.436503 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-19 07:12:42.436509 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-19 07:12:42.436515 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-19 07:12:42.436521 | 
orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-19 07:12:42.436527 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-19 07:12:42.436533 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-19 07:12:42.436539 | orchestrator | 2025-09-19 07:12:42.436545 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-19 07:12:42.436552 | orchestrator | Friday 19 September 2025 07:04:13 +0000 (0:00:06.306) 0:02:46.716 ****** 2025-09-19 07:12:42.436558 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.436564 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.436570 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.436577 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.436583 | orchestrator | 2025-09-19 07:12:42.436589 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-19 07:12:42.436595 | orchestrator | Friday 19 September 2025 07:04:14 +0000 (0:00:01.143) 0:02:47.860 ****** 2025-09-19 07:12:42.436601 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.436608 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.436614 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.436620 | orchestrator | 2025-09-19 07:12:42.436626 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-19 07:12:42.436632 | orchestrator | Friday 19 September 2025 07:04:15 +0000 (0:00:00.922) 
0:02:48.782 ****** 2025-09-19 07:12:42.436638 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.436644 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.436653 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.436659 | orchestrator | 2025-09-19 07:12:42.436665 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-19 07:12:42.436671 | orchestrator | Friday 19 September 2025 07:04:17 +0000 (0:00:01.464) 0:02:50.247 ****** 2025-09-19 07:12:42.436678 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.436684 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.436690 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.436696 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.436702 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.436707 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.436714 | orchestrator | 2025-09-19 07:12:42.436720 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-19 07:12:42.436726 | orchestrator | Friday 19 September 2025 07:04:17 +0000 (0:00:00.613) 0:02:50.860 ****** 2025-09-19 07:12:42.436732 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.436738 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.436744 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.436750 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.436759 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.436766 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.436772 | orchestrator | 2025-09-19 07:12:42.436778 | 
orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-19 07:12:42.436784 | orchestrator | Friday 19 September 2025 07:04:18 +0000 (0:00:00.716) 0:02:51.576 ****** 2025-09-19 07:12:42.436790 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.436796 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.436802 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.436808 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.436814 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.436820 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.436826 | orchestrator | 2025-09-19 07:12:42.436832 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-19 07:12:42.436838 | orchestrator | Friday 19 September 2025 07:04:18 +0000 (0:00:00.588) 0:02:52.165 ****** 2025-09-19 07:12:42.436861 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.436868 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.436874 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.436880 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.436886 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.436892 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.436898 | orchestrator | 2025-09-19 07:12:42.436904 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-19 07:12:42.436910 | orchestrator | Friday 19 September 2025 07:04:19 +0000 (0:00:00.531) 0:02:52.696 ****** 2025-09-19 07:12:42.436916 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.436922 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.436928 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.436934 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.436940 | orchestrator | skipping: [testbed-node-1] 
2025-09-19 07:12:42.436947 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.436953 | orchestrator | 2025-09-19 07:12:42.436959 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-19 07:12:42.436965 | orchestrator | Friday 19 September 2025 07:04:20 +0000 (0:00:00.829) 0:02:53.526 ****** 2025-09-19 07:12:42.436971 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.436977 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.436983 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.436989 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.436995 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.437001 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.437007 | orchestrator | 2025-09-19 07:12:42.437013 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-19 07:12:42.437019 | orchestrator | Friday 19 September 2025 07:04:21 +0000 (0:00:01.170) 0:02:54.696 ****** 2025-09-19 07:12:42.437025 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.437031 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.437037 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.437043 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.437049 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.437055 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.437061 | orchestrator | 2025-09-19 07:12:42.437068 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-19 07:12:42.437074 | orchestrator | Friday 19 September 2025 07:04:22 +0000 (0:00:01.188) 0:02:55.885 ****** 2025-09-19 07:12:42.437080 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.437086 | orchestrator | skipping: [testbed-node-4] 
2025-09-19 07:12:42.437092 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.437098 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.437104 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.437110 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.437122 | orchestrator | 2025-09-19 07:12:42.437128 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-19 07:12:42.437135 | orchestrator | Friday 19 September 2025 07:04:23 +0000 (0:00:00.575) 0:02:56.460 ****** 2025-09-19 07:12:42.437141 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.437147 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.437153 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.437159 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.437165 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.437171 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.437177 | orchestrator | 2025-09-19 07:12:42.437183 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-19 07:12:42.437189 | orchestrator | Friday 19 September 2025 07:04:25 +0000 (0:00:02.695) 0:02:59.156 ****** 2025-09-19 07:12:42.437195 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.437201 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.437207 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.437213 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.437219 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.437225 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.437231 | orchestrator | 2025-09-19 07:12:42.437237 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-19 07:12:42.437246 | orchestrator | Friday 19 September 2025 07:04:26 +0000 (0:00:00.988) 0:03:00.144 ****** 2025-09-19 
07:12:42.437252 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.437258 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.437264 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.437270 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437276 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437282 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437288 | orchestrator |
2025-09-19 07:12:42.437294 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-09-19 07:12:42.437300 | orchestrator | Friday 19 September 2025 07:04:28 +0000 (0:00:01.067) 0:03:01.212 ******
2025-09-19 07:12:42.437306 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437312 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.437318 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.437324 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437330 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437336 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437342 | orchestrator |
2025-09-19 07:12:42.437348 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-09-19 07:12:42.437354 | orchestrator | Friday 19 September 2025 07:04:28 +0000 (0:00:00.829) 0:03:02.041 ******
2025-09-19 07:12:42.437361 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-19 07:12:42.437367 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-19 07:12:42.437373 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-19 07:12:42.437379 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437385 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437391 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437397 | orchestrator |
2025-09-19 07:12:42.437406 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-09-19 07:12:42.437413 | orchestrator | Friday 19 September 2025 07:04:30 +0000 (0:00:01.209) 0:03:03.251 ******
2025-09-19 07:12:42.437420 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-09-19 07:12:42.437432 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-09-19 07:12:42.437439 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-09-19 07:12:42.437445 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.437451 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-09-19 07:12:42.437458 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-09-19 07:12:42.437464 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-09-19 07:12:42.437470 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437476 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.437482 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437488 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437494 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437500 | orchestrator |
2025-09-19 07:12:42.437506 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-09-19 07:12:42.437512 | orchestrator | Friday 19 September 2025 07:04:30 +0000 (0:00:00.738) 0:03:03.989 ******
2025-09-19 07:12:42.437518 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437524 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.437530 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.437536 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437542 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437548 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437554 | orchestrator |
2025-09-19 07:12:42.437563 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-09-19 07:12:42.437569 | orchestrator | Friday 19 September 2025 07:04:31 +0000 (0:00:00.670) 0:03:04.660 ******
2025-09-19 07:12:42.437575 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437581 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.437587 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.437593 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437599 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437605 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437611 | orchestrator |
2025-09-19 07:12:42.437617 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-19 07:12:42.437623 | orchestrator | Friday 19 September 2025 07:04:32 +0000 (0:00:00.568) 0:03:05.228 ******
2025-09-19 07:12:42.437629 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437635 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.437641 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.437647 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437653 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437663 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437669 | orchestrator |
2025-09-19 07:12:42.437675 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-19 07:12:42.437681 | orchestrator | Friday 19 September 2025 07:04:32 +0000 (0:00:00.952) 0:03:06.181 ******
2025-09-19 07:12:42.437687 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437693 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.437699 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.437705 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437711 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437717 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437723 | orchestrator |
2025-09-19 07:12:42.437729 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-19 07:12:42.437735 | orchestrator | Friday 19 September 2025 07:04:34 +0000 (0:00:01.147) 0:03:07.328 ******
2025-09-19 07:12:42.437741 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.437750 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.437757 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437763 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437769 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437775 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437780 | orchestrator |
2025-09-19 07:12:42.437787 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-19 07:12:42.437793 | orchestrator | Friday 19 September 2025 07:04:34 +0000 (0:00:00.709) 0:03:08.038 ******
2025-09-19 07:12:42.437799 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.437805 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.437811 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.437817 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.437823 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.437829 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.437835 | orchestrator |
2025-09-19 07:12:42.437841 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-19 07:12:42.437859 | orchestrator | Friday 19 September 2025 07:04:35 +0000 (0:00:00.972) 0:03:09.011 ******
2025-09-19 07:12:42.437866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.437872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.437878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.437884 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437890 | orchestrator |
2025-09-19 07:12:42.437896 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-19 07:12:42.437902 | orchestrator | Friday 19 September 2025 07:04:36 +0000 (0:00:00.524) 0:03:09.535 ******
2025-09-19 07:12:42.437908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.437914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.437920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.437926 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437933 | orchestrator |
2025-09-19 07:12:42.437939 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-19 07:12:42.437945 | orchestrator | Friday 19 September 2025 07:04:36 +0000 (0:00:00.506) 0:03:10.042 ******
2025-09-19 07:12:42.437951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.437957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.437963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.437969 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.437975 | orchestrator |
2025-09-19 07:12:42.437981 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-19 07:12:42.437987 | orchestrator | Friday 19 September 2025 07:04:37 +0000 (0:00:00.732) 0:03:10.774 ******
2025-09-19 07:12:42.437993 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.438003 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.438009 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.438033 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.438040 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.438047 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.438053 | orchestrator |
2025-09-19 07:12:42.438059 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-19 07:12:42.438065 | orchestrator | Friday 19 September 2025 07:04:38 +0000 (0:00:00.541) 0:03:11.316 ******
2025-09-19 07:12:42.438071 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-19 07:12:42.438077 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-09-19 07:12:42.438083 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.438089 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-19 07:12:42.438095 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-19 07:12:42.438101 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-09-19 07:12:42.438108 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.438113 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-09-19 07:12:42.438120 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.438126 | orchestrator |
2025-09-19 07:12:42.438142 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-09-19 07:12:42.438149 | orchestrator | Friday 19 September 2025 07:04:39 +0000 (0:00:01.867) 0:03:13.184 ******
2025-09-19 07:12:42.438155 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.438161 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.438167 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.438173 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.438179 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.438185 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.438191 | orchestrator |
2025-09-19 07:12:42.438197 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 07:12:42.438203 | orchestrator | Friday 19 September 2025 07:04:42 +0000 (0:00:02.714) 0:03:15.899 ******
2025-09-19 07:12:42.438209 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.438215 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.438221 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.438227 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.438233 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.438239 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.438245 | orchestrator |
2025-09-19 07:12:42.438251 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-19 07:12:42.438257 | orchestrator | Friday 19 September 2025 07:04:44 +0000 (0:00:01.524) 0:03:17.423 ******
2025-09-19 07:12:42.438263 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438269 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.438275 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.438282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:12:42.438288 | orchestrator |
2025-09-19 07:12:42.438294 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-19 07:12:42.438300 | orchestrator | Friday 19 September 2025 07:04:45 +0000 (0:00:00.967) 0:03:18.391 ******
2025-09-19 07:12:42.438306 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.438312 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.438318 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.438324 | orchestrator |
2025-09-19 07:12:42.438340 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-19 07:12:42.438346 | orchestrator | Friday 19 September 2025 07:04:45 +0000 (0:00:00.423) 0:03:18.815 ******
2025-09-19 07:12:42.438353 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.438359 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.438364 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.438370 | orchestrator |
2025-09-19 07:12:42.438377 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-19 07:12:42.438388 | orchestrator | Friday 19 September 2025 07:04:46 +0000 (0:00:01.365) 0:03:20.180 ******
2025-09-19 07:12:42.438394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:12:42.438400 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 07:12:42.438406 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 07:12:42.438413 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.438419 | orchestrator |
2025-09-19 07:12:42.438425 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-19 07:12:42.438431 | orchestrator | Friday 19 September 2025 07:04:48 +0000 (0:00:01.041) 0:03:21.222 ******
2025-09-19 07:12:42.438437 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.438443 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.438449 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.438455 | orchestrator |
2025-09-19 07:12:42.438461 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-19 07:12:42.438468 | orchestrator | Friday 19 September 2025 07:04:48 +0000 (0:00:00.419) 0:03:21.641 ******
2025-09-19 07:12:42.438474 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.438480 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.438486 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.438492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.438498 | orchestrator |
2025-09-19 07:12:42.438504 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-19 07:12:42.438510 | orchestrator | Friday 19 September 2025 07:04:49 +0000 (0:00:01.061) 0:03:22.703 ******
2025-09-19 07:12:42.438516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.438523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.438529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.438535 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438541 | orchestrator |
2025-09-19 07:12:42.438547 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-19 07:12:42.438553 | orchestrator | Friday 19 September 2025 07:04:49 +0000 (0:00:00.374) 0:03:23.077 ******
2025-09-19 07:12:42.438560 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.438566 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438572 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.438578 | orchestrator |
2025-09-19 07:12:42.438584 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-19 07:12:42.438590 | orchestrator | Friday 19 September 2025 07:04:50 +0000 (0:00:00.656) 0:03:23.734 ******
2025-09-19 07:12:42.438596 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438602 | orchestrator |
2025-09-19 07:12:42.438608 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-19 07:12:42.438615 | orchestrator | Friday 19 September 2025 07:04:50 +0000 (0:00:00.284) 0:03:24.019 ******
2025-09-19 07:12:42.438621 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438627 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.438633 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.438639 | orchestrator |
2025-09-19 07:12:42.438645 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-19 07:12:42.438651 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.351) 0:03:24.371 ******
2025-09-19 07:12:42.438657 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438663 | orchestrator |
2025-09-19 07:12:42.438673 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-19 07:12:42.438679 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.311) 0:03:24.683 ******
2025-09-19 07:12:42.438685 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438691 | orchestrator |
2025-09-19 07:12:42.438697 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-19 07:12:42.438707 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.219) 0:03:24.902 ******
2025-09-19 07:12:42.438713 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438719 | orchestrator |
2025-09-19 07:12:42.438725 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-19 07:12:42.438732 | orchestrator | Friday 19 September 2025 07:04:51 +0000 (0:00:00.111) 0:03:25.014 ******
2025-09-19 07:12:42.438738 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438744 | orchestrator |
2025-09-19 07:12:42.438750 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-19 07:12:42.438756 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:00.258) 0:03:25.273 ******
2025-09-19 07:12:42.438762 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438768 | orchestrator |
2025-09-19 07:12:42.438775 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-19 07:12:42.438781 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:00.220) 0:03:25.493 ******
2025-09-19 07:12:42.438787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.438793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.438799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.438805 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438811 | orchestrator |
2025-09-19 07:12:42.438817 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-19 07:12:42.438824 | orchestrator | Friday 19 September 2025 07:04:52 +0000 (0:00:00.316) 0:03:25.809 ******
2025-09-19 07:12:42.438830 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438839 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.438856 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.438862 | orchestrator |
2025-09-19 07:12:42.438869 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-19 07:12:42.438875 | orchestrator | Friday 19 September 2025 07:04:53 +0000 (0:00:00.476) 0:03:26.285 ******
2025-09-19 07:12:42.438881 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438887 | orchestrator |
2025-09-19 07:12:42.438893 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-19 07:12:42.438899 | orchestrator | Friday 19 September 2025 07:04:53 +0000 (0:00:00.196) 0:03:26.482 ******
2025-09-19 07:12:42.438905 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.438911 | orchestrator |
2025-09-19 07:12:42.438917 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-19 07:12:42.438923 | orchestrator | Friday 19 September 2025 07:04:53 +0000 (0:00:00.213) 0:03:26.695 ******
2025-09-19 07:12:42.438929 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.438936 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.438942 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.438948 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.438954 | orchestrator |
2025-09-19 07:12:42.438960 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-19 07:12:42.438966 | orchestrator | Friday 19 September 2025 07:04:54 +0000 (0:00:00.619) 0:03:27.315 ******
2025-09-19 07:12:42.438972 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.438978 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.438985 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.438991 | orchestrator |
2025-09-19 07:12:42.438998 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-19 07:12:42.439009 | orchestrator | Friday 19 September 2025 07:04:54 +0000 (0:00:00.419) 0:03:27.734 ******
2025-09-19 07:12:42.439020 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.439031 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.439042 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.439051 | orchestrator |
2025-09-19 07:12:42.439062 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-19 07:12:42.439077 | orchestrator | Friday 19 September 2025 07:04:55 +0000 (0:00:01.256) 0:03:28.991 ******
2025-09-19 07:12:42.439088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.439099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.439108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.439119 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.439130 | orchestrator |
2025-09-19 07:12:42.439141 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-19 07:12:42.439148 | orchestrator | Friday 19 September 2025 07:04:56 +0000 (0:00:00.472) 0:03:29.463 ******
2025-09-19 07:12:42.439155 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.439161 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.439167 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.439173 | orchestrator |
2025-09-19 07:12:42.439179 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-19 07:12:42.439185 | orchestrator | Friday 19 September 2025 07:04:56 +0000 (0:00:00.405) 0:03:29.869 ******
2025-09-19 07:12:42.439191 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.439197 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.439203 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.439209 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.439215 | orchestrator |
2025-09-19 07:12:42.439221 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-19 07:12:42.439227 | orchestrator | Friday 19 September 2025 07:04:57 +0000 (0:00:00.918) 0:03:30.787 ******
2025-09-19 07:12:42.439233 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.439239 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.439245 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.439251 | orchestrator |
2025-09-19 07:12:42.439261 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-19 07:12:42.439267 | orchestrator | Friday 19 September 2025 07:04:57 +0000 (0:00:00.287) 0:03:31.075 ******
2025-09-19 07:12:42.439273 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.439279 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.439285 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.439291 | orchestrator |
2025-09-19 07:12:42.439297 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-19 07:12:42.439303 | orchestrator | Friday 19 September 2025 07:04:59 +0000 (0:00:01.270) 0:03:32.345 ******
2025-09-19 07:12:42.439309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.439315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.439321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.439328 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.439334 | orchestrator |
2025-09-19 07:12:42.439340 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-19 07:12:42.439346 | orchestrator | Friday 19 September 2025 07:04:59 +0000 (0:00:00.618) 0:03:32.964 ******
2025-09-19 07:12:42.439352 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.439358 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.439364 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.439370 | orchestrator |
2025-09-19 07:12:42.439376 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-09-19 07:12:42.439382 | orchestrator | Friday 19 September 2025 07:05:00 +0000 (0:00:00.391) 0:03:33.355 ******
2025-09-19 07:12:42.439388 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.439394 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.439400 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.439406 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.439412 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.439418 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.439428 | orchestrator |
2025-09-19 07:12:42.439434 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-19 07:12:42.439445 | orchestrator | Friday 19 September 2025 07:05:01 +0000 (0:00:01.261) 0:03:34.617 ******
2025-09-19 07:12:42.439452 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.439458 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.439464 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.439470 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:12:42.439476 | orchestrator |
2025-09-19 07:12:42.439482 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-19 07:12:42.439488 | orchestrator | Friday 19 September 2025 07:05:03 +0000 (0:00:01.915) 0:03:36.533 ******
2025-09-19 07:12:42.439494 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.439500 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.439506 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.439512 | orchestrator |
2025-09-19 07:12:42.439518 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-19 07:12:42.439524 | orchestrator | Friday 19 September 2025 07:05:04 +0000 (0:00:00.730) 0:03:37.263 ******
2025-09-19 07:12:42.439530 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.439536 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.439542 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.439548 | orchestrator |
2025-09-19 07:12:42.439555 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-19 07:12:42.439561 | orchestrator | Friday 19 September 2025 07:05:05 +0000 (0:00:01.733) 0:03:38.997 ******
2025-09-19 07:12:42.439567 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:12:42.439573 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 07:12:42.439579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 07:12:42.439585 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.439591 | orchestrator |
2025-09-19 07:12:42.439597 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-19 07:12:42.439603 | orchestrator | Friday 19 September 2025 07:05:06 +0000 (0:00:00.527) 0:03:39.524 ******
2025-09-19 07:12:42.439609 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.439615 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.439621 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.439627 | orchestrator |
2025-09-19 07:12:42.439633 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-09-19 07:12:42.439639 | orchestrator |
2025-09-19 07:12:42.439645 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 07:12:42.439651 | orchestrator | Friday 19 September 2025 07:05:07 +0000 (0:00:00.741) 0:03:40.266 ******
2025-09-19 07:12:42.439657 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:12:42.439663 | orchestrator |
2025-09-19 07:12:42.439669 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 07:12:42.439675 | orchestrator | Friday 19 September 2025 07:05:07 +0000 (0:00:00.784) 0:03:41.051 ******
2025-09-19 07:12:42.439681 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:12:42.439687 | orchestrator |
2025-09-19 07:12:42.439693 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 07:12:42.439699 | orchestrator | Friday 19 September 2025 07:05:08 +0000 (0:00:00.591) 0:03:41.642 ******
2025-09-19 07:12:42.439705 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.439711 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.439717 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.439723 | orchestrator |
2025-09-19 07:12:42.439729 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 07:12:42.439739 | orchestrator | Friday 19 September 2025 07:05:09 +0000 (0:00:00.824) 0:03:42.467 ******
2025-09-19 07:12:42.439745 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.439751 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.439759 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.439766 | orchestrator |
2025-09-19 07:12:42.439772 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 07:12:42.439778 | orchestrator | Friday 19 September 2025 07:05:09 +0000 (0:00:00.406) 0:03:42.873 ******
2025-09-19 07:12:42.439784 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.439790 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.439796 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.439802 | orchestrator |
2025-09-19 07:12:42.439808 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 07:12:42.439814 | orchestrator | Friday 19 September 2025 07:05:10 +0000 (0:00:00.503) 0:03:43.377 ******
2025-09-19 07:12:42.439820 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.439826 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.439832 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.439838 | orchestrator |
2025-09-19 07:12:42.439877 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 07:12:42.439884 | orchestrator | Friday 19 September 2025 07:05:10 +0000 (0:00:00.305) 0:03:43.682 ******
2025-09-19 07:12:42.439890 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.439896 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.439902 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.439908 | orchestrator |
2025-09-19 07:12:42.439914 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 07:12:42.439921 | orchestrator | Friday 19 September 2025 07:05:11 +0000 (0:00:00.798) 0:03:44.481 ******
2025-09-19 07:12:42.439927 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.439933 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.439939 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.439945 | orchestrator |
2025-09-19 07:12:42.439951 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 07:12:42.439957 | orchestrator | Friday 19 September 2025 07:05:11 +0000 (0:00:00.313) 0:03:44.795 ******
2025-09-19 07:12:42.439963 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.439969 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.439976 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.439982 | orchestrator |
2025-09-19 07:12:42.439992 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 07:12:42.439998 | orchestrator | Friday 19 September 2025 07:05:12 +0000 (0:00:00.539) 0:03:45.334 ******
2025-09-19 07:12:42.440004 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.440010 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.440016 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.440022 | orchestrator |
2025-09-19 07:12:42.440027 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 07:12:42.440032 | orchestrator | Friday 19 September 2025 07:05:12 +0000 (0:00:00.734) 0:03:46.068 ******
2025-09-19 07:12:42.440038 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.440043 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.440048 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.440053 | orchestrator |
2025-09-19 07:12:42.440059 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 07:12:42.440064 | orchestrator | Friday 19 September 2025 07:05:13 +0000 (0:00:00.888) 0:03:46.957 ******
2025-09-19 07:12:42.440069 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.440075 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.440080 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.440085 | orchestrator |
2025-09-19 07:12:42.440091 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 07:12:42.440096 | orchestrator | Friday 19 September 2025 07:05:14 +0000 (0:00:00.300) 0:03:47.258 ******
2025-09-19 07:12:42.440107 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.440113 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.440118 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.440123 | orchestrator |
2025-09-19 07:12:42.440129 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 07:12:42.440134 | orchestrator | Friday 19 September 2025 07:05:14 +0000 (0:00:00.680) 0:03:47.938 ******
2025-09-19 07:12:42.440139 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.440144 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.440150 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.440155 | orchestrator |
2025-09-19 07:12:42.440160 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 07:12:42.440166 | orchestrator | Friday 19 September 2025 07:05:15 +0000 (0:00:00.399) 0:03:48.338 ******
2025-09-19 07:12:42.440171 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.440176 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.440182 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.440187 | orchestrator |
2025-09-19 07:12:42.440192 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 07:12:42.440198 | orchestrator | Friday 19 September 2025 07:05:15 +0000 (0:00:00.458) 0:03:48.797 ******
2025-09-19 07:12:42.440203 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.440208 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.440213 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.440219 | orchestrator |
2025-09-19 07:12:42.440224 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 07:12:42.440229 | orchestrator | Friday 19 September 2025 07:05:16 +0000 (0:00:00.640) 0:03:49.437 ******
2025-09-19 07:12:42.440235 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.440240 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.440245 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.440250 | orchestrator |
2025-09-19 07:12:42.440256 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 07:12:42.440261 | orchestrator | Friday 19 September 2025 07:05:16 +0000 (0:00:00.748) 0:03:50.185 ******
2025-09-19 07:12:42.440266 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.440271 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.440277 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.440282 | orchestrator |
2025-09-19 07:12:42.440287 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 07:12:42.440293 | orchestrator | Friday 19 September 2025 07:05:17 +0000 (0:00:00.387) 0:03:50.573 ******
2025-09-19 07:12:42.440298 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.440303 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.440308 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.440314 | orchestrator |
2025-09-19 07:12:42.440322 | orchestrator | TASK [ceph-handler : Set_fact
handler_crash_status] **************************** 2025-09-19 07:12:42.440327 | orchestrator | Friday 19 September 2025 07:05:17 +0000 (0:00:00.445) 0:03:51.019 ****** 2025-09-19 07:12:42.440332 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.440338 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.440343 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.440348 | orchestrator | 2025-09-19 07:12:42.440354 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 07:12:42.440359 | orchestrator | Friday 19 September 2025 07:05:18 +0000 (0:00:00.496) 0:03:51.516 ****** 2025-09-19 07:12:42.440364 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.440369 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.440375 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.440380 | orchestrator | 2025-09-19 07:12:42.440385 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-19 07:12:42.440391 | orchestrator | Friday 19 September 2025 07:05:19 +0000 (0:00:00.923) 0:03:52.439 ****** 2025-09-19 07:12:42.440396 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.440401 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.440410 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.440415 | orchestrator | 2025-09-19 07:12:42.440421 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-19 07:12:42.440426 | orchestrator | Friday 19 September 2025 07:05:19 +0000 (0:00:00.300) 0:03:52.739 ****** 2025-09-19 07:12:42.440431 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.440437 | orchestrator | 2025-09-19 07:12:42.440442 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-19 07:12:42.440447 | orchestrator | Friday 19 September 
2025 07:05:20 +0000 (0:00:00.522) 0:03:53.262 ****** 2025-09-19 07:12:42.440453 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.440458 | orchestrator | 2025-09-19 07:12:42.440463 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-19 07:12:42.440472 | orchestrator | Friday 19 September 2025 07:05:20 +0000 (0:00:00.348) 0:03:53.610 ****** 2025-09-19 07:12:42.440477 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-19 07:12:42.440483 | orchestrator | 2025-09-19 07:12:42.440488 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-19 07:12:42.440493 | orchestrator | Friday 19 September 2025 07:05:21 +0000 (0:00:00.813) 0:03:54.424 ****** 2025-09-19 07:12:42.440499 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.440504 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.440509 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.440514 | orchestrator | 2025-09-19 07:12:42.440520 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-19 07:12:42.440525 | orchestrator | Friday 19 September 2025 07:05:21 +0000 (0:00:00.312) 0:03:54.737 ****** 2025-09-19 07:12:42.440530 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.440536 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.440541 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.440546 | orchestrator | 2025-09-19 07:12:42.440552 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-19 07:12:42.440557 | orchestrator | Friday 19 September 2025 07:05:22 +0000 (0:00:00.458) 0:03:55.195 ****** 2025-09-19 07:12:42.440562 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:42.440568 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.440573 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.440578 | orchestrator | 
2025-09-19 07:12:42.440583 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-19 07:12:42.440589 | orchestrator | Friday 19 September 2025 07:05:23 +0000 (0:00:01.190) 0:03:56.386 ****** 2025-09-19 07:12:42.440594 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:42.440599 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.440605 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.440610 | orchestrator | 2025-09-19 07:12:42.440615 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-19 07:12:42.440621 | orchestrator | Friday 19 September 2025 07:05:24 +0000 (0:00:00.966) 0:03:57.352 ****** 2025-09-19 07:12:42.440626 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.440631 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:42.440636 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.440642 | orchestrator | 2025-09-19 07:12:42.440647 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-19 07:12:42.440652 | orchestrator | Friday 19 September 2025 07:05:24 +0000 (0:00:00.684) 0:03:58.037 ****** 2025-09-19 07:12:42.440658 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.440663 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.440669 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.440674 | orchestrator | 2025-09-19 07:12:42.440679 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-19 07:12:42.440684 | orchestrator | Friday 19 September 2025 07:05:25 +0000 (0:00:00.644) 0:03:58.681 ****** 2025-09-19 07:12:42.440690 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.440698 | orchestrator | 2025-09-19 07:12:42.440703 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-19 07:12:42.440709 | orchestrator | 
Friday 19 September 2025 07:05:26 +0000 (0:00:01.387) 0:04:00.068 ****** 2025-09-19 07:12:42.440714 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.440719 | orchestrator | 2025-09-19 07:12:42.440725 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-19 07:12:42.440730 | orchestrator | Friday 19 September 2025 07:05:27 +0000 (0:00:00.758) 0:04:00.827 ****** 2025-09-19 07:12:42.440735 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 07:12:42.440741 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.440746 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.440751 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-19 07:12:42.440756 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 07:12:42.440762 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 07:12:42.440769 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 07:12:42.440775 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-19 07:12:42.440780 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-19 07:12:42.440785 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-19 07:12:42.440791 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 07:12:42.440796 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-09-19 07:12:42.440801 | orchestrator | 2025-09-19 07:12:42.440807 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-19 07:12:42.440812 | orchestrator | Friday 19 September 2025 07:05:31 +0000 (0:00:03.547) 0:04:04.375 ****** 2025-09-19 07:12:42.440817 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.440823 | 
orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:42.440828 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.440833 | orchestrator | 2025-09-19 07:12:42.440838 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-19 07:12:42.440852 | orchestrator | Friday 19 September 2025 07:05:32 +0000 (0:00:01.566) 0:04:05.942 ****** 2025-09-19 07:12:42.440857 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.440863 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.440868 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.440873 | orchestrator | 2025-09-19 07:12:42.440878 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-19 07:12:42.440884 | orchestrator | Friday 19 September 2025 07:05:33 +0000 (0:00:00.404) 0:04:06.346 ****** 2025-09-19 07:12:42.440889 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.440894 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.440900 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.440905 | orchestrator | 2025-09-19 07:12:42.440911 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-19 07:12:42.440916 | orchestrator | Friday 19 September 2025 07:05:33 +0000 (0:00:00.349) 0:04:06.696 ****** 2025-09-19 07:12:42.440921 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.440927 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:42.440932 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.440938 | orchestrator | 2025-09-19 07:12:42.440946 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-19 07:12:42.440952 | orchestrator | Friday 19 September 2025 07:05:35 +0000 (0:00:01.996) 0:04:08.693 ****** 2025-09-19 07:12:42.440957 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.440962 | orchestrator | changed: [testbed-node-1] 
2025-09-19 07:12:42.440967 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.440973 | orchestrator | 2025-09-19 07:12:42.440978 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-19 07:12:42.440983 | orchestrator | Friday 19 September 2025 07:05:37 +0000 (0:00:01.622) 0:04:10.315 ****** 2025-09-19 07:12:42.440993 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.440998 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441003 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.441009 | orchestrator | 2025-09-19 07:12:42.441014 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-19 07:12:42.441019 | orchestrator | Friday 19 September 2025 07:05:37 +0000 (0:00:00.389) 0:04:10.705 ****** 2025-09-19 07:12:42.441025 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.441030 | orchestrator | 2025-09-19 07:12:42.441035 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-19 07:12:42.441040 | orchestrator | Friday 19 September 2025 07:05:38 +0000 (0:00:00.548) 0:04:11.253 ****** 2025-09-19 07:12:42.441046 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441051 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441056 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.441061 | orchestrator | 2025-09-19 07:12:42.441067 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-19 07:12:42.441072 | orchestrator | Friday 19 September 2025 07:05:38 +0000 (0:00:00.559) 0:04:11.813 ****** 2025-09-19 07:12:42.441077 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441083 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441088 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 07:12:42.441093 | orchestrator | 2025-09-19 07:12:42.441098 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-19 07:12:42.441104 | orchestrator | Friday 19 September 2025 07:05:38 +0000 (0:00:00.350) 0:04:12.163 ****** 2025-09-19 07:12:42.441109 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.441114 | orchestrator | 2025-09-19 07:12:42.441120 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-19 07:12:42.441125 | orchestrator | Friday 19 September 2025 07:05:39 +0000 (0:00:00.604) 0:04:12.768 ****** 2025-09-19 07:12:42.441130 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.441136 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.441141 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:42.441146 | orchestrator | 2025-09-19 07:12:42.441152 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-19 07:12:42.441157 | orchestrator | Friday 19 September 2025 07:05:41 +0000 (0:00:02.308) 0:04:15.076 ****** 2025-09-19 07:12:42.441162 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.441168 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.441173 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:42.441178 | orchestrator | 2025-09-19 07:12:42.441183 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-19 07:12:42.441189 | orchestrator | Friday 19 September 2025 07:05:43 +0000 (0:00:01.756) 0:04:16.833 ****** 2025-09-19 07:12:42.441194 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.441199 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:42.441204 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.441210 | orchestrator | 2025-09-19 
07:12:42.441215 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-19 07:12:42.441220 | orchestrator | Friday 19 September 2025 07:05:45 +0000 (0:00:01.934) 0:04:18.768 ****** 2025-09-19 07:12:42.441225 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:12:42.441231 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:12:42.441238 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:12:42.441244 | orchestrator | 2025-09-19 07:12:42.441249 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-19 07:12:42.441254 | orchestrator | Friday 19 September 2025 07:05:48 +0000 (0:00:02.809) 0:04:21.578 ****** 2025-09-19 07:12:42.441260 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.441268 | orchestrator | 2025-09-19 07:12:42.441274 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-19 07:12:42.441279 | orchestrator | Friday 19 September 2025 07:05:49 +0000 (0:00:00.843) 0:04:22.421 ****** 2025-09-19 07:12:42.441284 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2025-09-19 07:12:42.441290 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.441295 | orchestrator | 2025-09-19 07:12:42.441300 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-19 07:12:42.441306 | orchestrator | Friday 19 September 2025 07:06:11 +0000 (0:00:21.806) 0:04:44.228 ****** 2025-09-19 07:12:42.441311 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.441316 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.441321 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.441327 | orchestrator | 2025-09-19 07:12:42.441332 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-19 07:12:42.441337 | orchestrator | Friday 19 September 2025 07:06:20 +0000 (0:00:09.276) 0:04:53.504 ****** 2025-09-19 07:12:42.441343 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441348 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441353 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.441358 | orchestrator | 2025-09-19 07:12:42.441364 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-19 07:12:42.441369 | orchestrator | Friday 19 September 2025 07:06:20 +0000 (0:00:00.380) 0:04:53.885 ****** 2025-09-19 07:12:42.441379 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9d33faa0befc39442963129123860c7b5cd8a01'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-19 07:12:42.441385 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9d33faa0befc39442963129123860c7b5cd8a01'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-19 07:12:42.441391 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9d33faa0befc39442963129123860c7b5cd8a01'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-19 07:12:42.441398 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9d33faa0befc39442963129123860c7b5cd8a01'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-19 07:12:42.441403 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9d33faa0befc39442963129123860c7b5cd8a01'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-19 07:12:42.441409 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f9d33faa0befc39442963129123860c7b5cd8a01'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f9d33faa0befc39442963129123860c7b5cd8a01'}])  2025-09-19 07:12:42.441418 | orchestrator | 2025-09-19 07:12:42.441424 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 07:12:42.441429 | orchestrator | Friday 19 September 2025 07:06:34 +0000 (0:00:13.970) 0:05:07.856 ****** 2025-09-19 07:12:42.441434 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441440 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441445 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.441450 | orchestrator | 2025-09-19 07:12:42.441456 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-19 07:12:42.441463 | orchestrator | Friday 19 September 2025 07:06:35 +0000 (0:00:00.396) 0:05:08.253 ****** 2025-09-19 07:12:42.441469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.441474 | orchestrator | 2025-09-19 07:12:42.441480 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-19 07:12:42.441485 | orchestrator | Friday 19 September 2025 07:06:35 +0000 (0:00:00.898) 0:05:09.151 ****** 2025-09-19 07:12:42.441490 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.441496 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.441501 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.441506 | orchestrator | 2025-09-19 07:12:42.441512 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-19 07:12:42.441517 | orchestrator | Friday 19 September 2025 07:06:36 +0000 (0:00:00.328) 0:05:09.480 ****** 2025-09-19 07:12:42.441522 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441528 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441533 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.441538 | orchestrator | 2025-09-19 07:12:42.441543 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-19 
07:12:42.441549 | orchestrator | Friday 19 September 2025 07:06:36 +0000 (0:00:00.352) 0:05:09.833 ****** 2025-09-19 07:12:42.441554 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 07:12:42.441559 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 07:12:42.441565 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 07:12:42.441570 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441575 | orchestrator | 2025-09-19 07:12:42.441580 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-19 07:12:42.441586 | orchestrator | Friday 19 September 2025 07:06:37 +0000 (0:00:00.579) 0:05:10.413 ****** 2025-09-19 07:12:42.441591 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.441597 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.441602 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.441607 | orchestrator | 2025-09-19 07:12:42.441615 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-19 07:12:42.441621 | orchestrator | 2025-09-19 07:12:42.441626 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 07:12:42.441632 | orchestrator | Friday 19 September 2025 07:06:38 +0000 (0:00:00.875) 0:05:11.288 ****** 2025-09-19 07:12:42.441637 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.441642 | orchestrator | 2025-09-19 07:12:42.441648 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 07:12:42.441653 | orchestrator | Friday 19 September 2025 07:06:38 +0000 (0:00:00.515) 0:05:11.803 ****** 2025-09-19 07:12:42.441658 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-19 07:12:42.441664 | orchestrator | 2025-09-19 07:12:42.441669 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 07:12:42.441674 | orchestrator | Friday 19 September 2025 07:06:39 +0000 (0:00:00.571) 0:05:12.375 ****** 2025-09-19 07:12:42.441680 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.441688 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.441693 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.441699 | orchestrator | 2025-09-19 07:12:42.441704 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 07:12:42.441709 | orchestrator | Friday 19 September 2025 07:06:40 +0000 (0:00:01.010) 0:05:13.385 ****** 2025-09-19 07:12:42.441715 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441720 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441725 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.441730 | orchestrator | 2025-09-19 07:12:42.441736 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 07:12:42.441741 | orchestrator | Friday 19 September 2025 07:06:40 +0000 (0:00:00.345) 0:05:13.731 ****** 2025-09-19 07:12:42.441746 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441752 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441757 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.441762 | orchestrator | 2025-09-19 07:12:42.441768 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 07:12:42.441773 | orchestrator | Friday 19 September 2025 07:06:40 +0000 (0:00:00.313) 0:05:14.044 ****** 2025-09-19 07:12:42.441778 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441784 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441789 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 07:12:42.441794 | orchestrator | 2025-09-19 07:12:42.441800 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 07:12:42.441805 | orchestrator | Friday 19 September 2025 07:06:41 +0000 (0:00:00.306) 0:05:14.351 ****** 2025-09-19 07:12:42.441810 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.441816 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.441821 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.441826 | orchestrator | 2025-09-19 07:12:42.441831 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 07:12:42.441837 | orchestrator | Friday 19 September 2025 07:06:42 +0000 (0:00:01.010) 0:05:15.361 ****** 2025-09-19 07:12:42.441851 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441857 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441862 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.441867 | orchestrator | 2025-09-19 07:12:42.441872 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 07:12:42.441878 | orchestrator | Friday 19 September 2025 07:06:42 +0000 (0:00:00.331) 0:05:15.693 ****** 2025-09-19 07:12:42.441883 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441888 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441894 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.441899 | orchestrator | 2025-09-19 07:12:42.441904 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 07:12:42.441910 | orchestrator | Friday 19 September 2025 07:06:42 +0000 (0:00:00.324) 0:05:16.018 ****** 2025-09-19 07:12:42.441915 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.441923 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.441928 | orchestrator | ok: [testbed-node-2] 2025-09-19 
07:12:42.441934 | orchestrator | 2025-09-19 07:12:42.441939 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 07:12:42.441944 | orchestrator | Friday 19 September 2025 07:06:43 +0000 (0:00:00.689) 0:05:16.707 ****** 2025-09-19 07:12:42.441950 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.441955 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.441960 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.441965 | orchestrator | 2025-09-19 07:12:42.441971 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 07:12:42.441976 | orchestrator | Friday 19 September 2025 07:06:44 +0000 (0:00:01.152) 0:05:17.860 ****** 2025-09-19 07:12:42.441982 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.441987 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.441992 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.442001 | orchestrator | 2025-09-19 07:12:42.442006 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 07:12:42.442012 | orchestrator | Friday 19 September 2025 07:06:44 +0000 (0:00:00.327) 0:05:18.188 ****** 2025-09-19 07:12:42.442082 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.442088 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.442093 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.442099 | orchestrator | 2025-09-19 07:12:42.442104 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 07:12:42.442110 | orchestrator | Friday 19 September 2025 07:06:45 +0000 (0:00:00.354) 0:05:18.542 ****** 2025-09-19 07:12:42.442115 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.442121 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.442126 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.442131 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 19 September 2025 07:06:45 +0000 (0:00:00.300) 0:05:18.843 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 19 September 2025 07:06:46 +0000 (0:00:00.572) 0:05:19.416 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 19 September 2025 07:06:46 +0000 (0:00:00.334) 0:05:19.751 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 19 September 2025 07:06:46 +0000 (0:00:00.340) 0:05:20.091 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 19 September 2025 07:06:47 +0000 (0:00:00.317) 0:05:20.409 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 19 September 2025 07:06:47 +0000 (0:00:00.326) 0:05:20.736 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 19 September 2025 07:06:48 +0000 (0:00:00.657) 0:05:21.393 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Friday 19 September 2025 07:06:48 +0000 (0:00:00.552) 0:05:21.946 ******
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Friday 19 September 2025 07:06:49 +0000 (0:00:00.960) 0:05:22.906 ******
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Friday 19 September 2025 07:06:50 +0000 (0:00:00.940) 0:05:23.846 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Friday 19 September 2025 07:06:51 +0000 (0:00:00.788) 0:05:24.635 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Friday 19 September 2025 07:06:51 +0000 (0:00:00.304) 0:05:24.940 ******
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
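The "Create ceph mgr keyring(s) on a mon node" task above provisions one keyring per mgr daemon on the first monitor. A rough hand-run equivalent, as a sketch only (the capability string shown is the common upstream default for mgr daemons; the exact caps and output path used by the role may differ, and these commands require a live Ceph cluster):

```shell
# Sketch: create a mgr keyring on a mon node, one per mgr host.
# Caps shown are the usual upstream defaults for ceph-mgr, not read from this job.
for host in testbed-node-0 testbed-node-1 testbed-node-2; do
  ceph auth get-or-create "mgr.${host}" \
    mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
    -o "/etc/ceph/ceph.mgr.${host}.keyring"
done
```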
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Friday 19 September 2025 07:07:02 +0000 (0:00:10.652) 0:05:35.592 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Friday 19 September 2025 07:07:03 +0000 (0:00:00.769) 0:05:36.362 ******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Friday 19 September 2025 07:07:05 +0000 (0:00:02.466) 0:05:38.829 ******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Friday 19 September 2025 07:07:06 +0000 (0:00:01.239) 0:05:40.069 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Friday 19 September 2025 07:07:07 +0000 (0:00:00.713) 0:05:40.782 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Friday 19 September 2025 07:07:07 +0000 (0:00:00.316) 0:05:41.098 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Friday 19 September 2025 07:07:08 +0000 (0:00:00.659) 0:05:41.758 ******
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Friday 19 September 2025 07:07:09 +0000 (0:00:00.556) 0:05:42.315 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Friday 19 September 2025 07:07:09 +0000 (0:00:00.337) 0:05:42.652 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Friday 19 September 2025 07:07:10 +0000 (0:00:00.578) 0:05:43.231 ******
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Friday 19 September 2025 07:07:10 +0000 (0:00:00.559) 0:05:43.790 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Friday 19 September 2025 07:07:11 +0000 (0:00:01.224) 0:05:45.015 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Friday 19 September 2025 07:07:13 +0000 (0:00:01.556) 0:05:46.571 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Friday 19 September 2025 07:07:15 +0000 (0:00:01.912) 0:05:48.484 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Friday 19 September 2025 07:07:17 +0000 (0:00:02.253) 0:05:50.737 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Friday 19 September 2025 07:07:17 +0000 (0:00:00.432) 0:05:51.169 ******
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Friday 19 September 2025 07:07:49 +0000 (0:00:31.193) 0:06:22.363 ******
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Friday 19 September 2025 07:07:50 +0000 (0:00:01.415) 0:06:23.779 ******
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Friday 19 September 2025 07:07:50 +0000 (0:00:00.371) 0:06:24.151 ******
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Friday 19 September 2025 07:07:51 +0000 (0:00:00.145) 0:06:24.296 ******
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Friday 19 September 2025 07:07:57 +0000 (0:00:06.661) 0:06:30.958 ******
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 19 September 2025 07:08:02 +0000 (0:00:05.031) 0:06:35.990 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Friday 19 September 2025 07:08:03 +0000 (0:00:01.056) 0:06:37.046 ******
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Friday 19 September 2025 07:08:04 +0000 (0:00:00.628) 0:06:37.675 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Friday 19 September 2025 07:08:04 +0000 (0:00:00.392) 0:06:38.067 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Friday 19 September 2025 07:08:06 +0000 (0:00:01.429) 0:06:39.497 ******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Friday 19 September 2025 07:08:06 +0000 (0:00:00.614) 0:06:40.112 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 19 September 2025 07:08:07 +0000 (0:00:00.601) 0:06:40.713 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 19 September 2025 07:08:08 +0000 (0:00:00.844) 0:06:41.557 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Friday 19 September 2025 07:08:08 +0000 (0:00:00.586) 0:06:42.144 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 19 September 2025 07:08:09 +0000 (0:00:00.331) 0:06:42.476 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 19 September 2025 07:08:10 +0000 (0:00:01.023) 0:06:43.500 ******
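The mgr module churn recorded in the ceph-mgr play above (disabling iostat, nfs, and restful; enabling dashboard and prometheus) maps onto the standard `ceph mgr module` CLI. A hedged sketch of the equivalent manual steps, assuming a reachable cluster (the role drives these through the container on the delegated mon node rather than calling the CLI directly like this):

```shell
# Sketch: reconcile enabled mgr modules by hand, mirroring the task results above.
ceph mgr module ls            # inspect currently enabled/available modules
ceph mgr module disable iostat
ceph mgr module disable nfs
ceph mgr module disable restful
ceph mgr module enable dashboard
ceph mgr module enable prometheus
```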
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 19 September 2025 07:08:11 +0000 (0:00:00.699) 0:06:44.199 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 19 September 2025 07:08:11 +0000 (0:00:00.704) 0:06:44.904 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 19 September 2025 07:08:12 +0000 (0:00:00.331) 0:06:45.235 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 19 September 2025 07:08:12 +0000 (0:00:00.567) 0:06:45.802 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 19 September 2025 07:08:12 +0000 (0:00:00.318) 0:06:46.121 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 19 September 2025 07:08:13 +0000 (0:00:00.695) 0:06:46.816 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 19 September 2025 07:08:14 +0000 (0:00:00.667) 0:06:47.483 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 19 September 2025 07:08:14 +0000 (0:00:00.597) 0:06:48.081 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 19 September 2025 07:08:15 +0000 (0:00:00.327) 0:06:48.408 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 19 September 2025 07:08:15 +0000 (0:00:00.331) 0:06:48.740 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 19 September 2025 07:08:15 +0000 (0:00:00.353) 0:06:49.093 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 19 September 2025 07:08:16 +0000 (0:00:00.649) 0:06:49.743 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 19 September 2025 07:08:16 +0000 (0:00:00.321) 0:06:50.064 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 19 September 2025 07:08:17 +0000 (0:00:00.286) 0:06:50.351 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 19 September 2025 07:08:17 +0000 (0:00:00.310) 0:06:50.662 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 19 September 2025 07:08:18 +0000 (0:00:00.597) 0:06:51.260 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Friday 19 September 2025 07:08:18 +0000 (0:00:00.553) 0:06:51.813 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Friday 19 September 2025 07:08:18 +0000 (0:00:00.321) 0:06:52.134 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Friday 19 September 2025 07:08:19 +0000 (0:00:00.956) 0:06:53.091 ******
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Friday 19 September 2025 07:08:20 +0000 (0:00:00.816) 0:06:53.907 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Friday 19 September 2025 07:08:21 +0000 (0:00:00.332) 0:06:54.240 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Friday 19 September 2025 07:08:21 +0000 (0:00:00.298) 0:06:54.539 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Friday 19 September 2025 07:08:22 +0000 (0:00:00.934) 0:06:55.473 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Friday 19 September 2025 07:08:22 +0000 (0:00:00.348) 0:06:55.822 ******
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
2025-09-19 07:12:42.445777 | orchestrator | Friday 19 September 2025 07:08:25 +0000 (0:00:03.156) 0:06:58.978 ****** 2025-09-19 07:12:42.445786 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.445795 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.445804 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.445813 | orchestrator | 2025-09-19 07:12:42.445822 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-19 07:12:42.445831 | orchestrator | Friday 19 September 2025 07:08:26 +0000 (0:00:00.309) 0:06:59.288 ****** 2025-09-19 07:12:42.445840 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.445864 | orchestrator | 2025-09-19 07:12:42.445872 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-19 07:12:42.445881 | orchestrator | Friday 19 September 2025 07:08:26 +0000 (0:00:00.811) 0:07:00.099 ****** 2025-09-19 07:12:42.445889 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-19 07:12:42.445897 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-19 07:12:42.445905 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-19 07:12:42.445913 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-19 07:12:42.445922 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-19 07:12:42.445930 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-19 07:12:42.445938 | orchestrator | 2025-09-19 07:12:42.445947 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-19 07:12:42.445955 | orchestrator | Friday 19 September 2025 07:08:27 +0000 (0:00:01.000) 0:07:01.099 ****** 2025-09-19 07:12:42.445963 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.445972 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 07:12:42.445980 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:12:42.445988 | orchestrator | 2025-09-19 07:12:42.445997 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-19 07:12:42.446005 | orchestrator | Friday 19 September 2025 07:08:30 +0000 (0:00:02.281) 0:07:03.380 ****** 2025-09-19 07:12:42.446049 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 07:12:42.446060 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 07:12:42.446070 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:12:42.446079 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 07:12:42.446088 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 07:12:42.446101 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:12:42.446110 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 07:12:42.446119 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 07:12:42.446129 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:12:42.446138 | orchestrator | 2025-09-19 07:12:42.446147 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-19 07:12:42.446155 | orchestrator | Friday 19 September 2025 07:08:31 +0000 (0:00:01.528) 0:07:04.909 ****** 2025-09-19 07:12:42.446165 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:12:42.446174 | orchestrator | 2025-09-19 07:12:42.446183 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-19 07:12:42.446192 | orchestrator | Friday 19 September 2025 07:08:33 +0000 (0:00:02.207) 0:07:07.117 ****** 2025-09-19 07:12:42.446201 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.446210 | orchestrator | 2025-09-19 07:12:42.446219 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-19 07:12:42.446228 | orchestrator | Friday 19 September 2025 07:08:34 +0000 (0:00:00.603) 0:07:07.720 ****** 2025-09-19 07:12:42.446237 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-deb73447-54c2-58c6-89f8-2e63b50c59b2', 'data_vg': 'ceph-deb73447-54c2-58c6-89f8-2e63b50c59b2'}) 2025-09-19 07:12:42.446247 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d4db71fd-07e0-550b-b185-dcfd36a5307b', 'data_vg': 'ceph-d4db71fd-07e0-550b-b185-dcfd36a5307b'}) 2025-09-19 07:12:42.446256 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-05a06e17-0162-5722-bf4c-f18a4cab61c7', 'data_vg': 'ceph-05a06e17-0162-5722-bf4c-f18a4cab61c7'}) 2025-09-19 07:12:42.446271 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1', 'data_vg': 'ceph-6d43fc0f-0470-50ff-9d43-3faecb8a0ab1'}) 2025-09-19 07:12:42.446281 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a0c5dfb3-0a46-5f65-b869-b08108365918', 'data_vg': 'ceph-a0c5dfb3-0a46-5f65-b869-b08108365918'}) 2025-09-19 07:12:42.446290 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-caff573e-485a-5d29-90dc-90eefd21fd68', 'data_vg': 'ceph-caff573e-485a-5d29-90dc-90eefd21fd68'}) 2025-09-19 07:12:42.446299 | orchestrator | 2025-09-19 07:12:42.446308 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-19 07:12:42.446316 | orchestrator | Friday 19 September 2025 07:09:16 +0000 (0:00:41.740) 0:07:49.460 ****** 2025-09-19 07:12:42.446326 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.446335 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
07:12:42.446344 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.446353 | orchestrator | 2025-09-19 07:12:42.446362 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-19 07:12:42.446370 | orchestrator | Friday 19 September 2025 07:09:16 +0000 (0:00:00.674) 0:07:50.134 ****** 2025-09-19 07:12:42.446380 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.446389 | orchestrator | 2025-09-19 07:12:42.446398 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-19 07:12:42.446406 | orchestrator | Friday 19 September 2025 07:09:17 +0000 (0:00:00.607) 0:07:50.742 ****** 2025-09-19 07:12:42.446416 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.446425 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.446439 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.446448 | orchestrator | 2025-09-19 07:12:42.446456 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-19 07:12:42.446465 | orchestrator | Friday 19 September 2025 07:09:18 +0000 (0:00:00.671) 0:07:51.414 ****** 2025-09-19 07:12:42.446474 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.446483 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.446492 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.446501 | orchestrator | 2025-09-19 07:12:42.446510 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-19 07:12:42.446519 | orchestrator | Friday 19 September 2025 07:09:21 +0000 (0:00:02.890) 0:07:54.304 ****** 2025-09-19 07:12:42.446527 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.446536 | orchestrator | 2025-09-19 07:12:42.446545 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-09-19 07:12:42.446553 | orchestrator | Friday 19 September 2025 07:09:21 +0000 (0:00:00.523) 0:07:54.828 ****** 2025-09-19 07:12:42.446561 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:12:42.446571 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:12:42.446580 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:12:42.446589 | orchestrator | 2025-09-19 07:12:42.446598 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-19 07:12:42.446607 | orchestrator | Friday 19 September 2025 07:09:22 +0000 (0:00:01.163) 0:07:55.992 ****** 2025-09-19 07:12:42.446616 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:12:42.446626 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:12:42.446635 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:12:42.446644 | orchestrator | 2025-09-19 07:12:42.446653 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-19 07:12:42.446661 | orchestrator | Friday 19 September 2025 07:09:24 +0000 (0:00:01.459) 0:07:57.451 ****** 2025-09-19 07:12:42.446671 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:12:42.446680 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:12:42.446689 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:12:42.446698 | orchestrator | 2025-09-19 07:12:42.446707 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-19 07:12:42.446719 | orchestrator | Friday 19 September 2025 07:09:25 +0000 (0:00:01.721) 0:07:59.173 ****** 2025-09-19 07:12:42.446728 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.446738 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.446747 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.446756 | orchestrator | 2025-09-19 07:12:42.446765 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-09-19 07:12:42.446774 | orchestrator | Friday 19 September 2025 07:09:26 +0000 (0:00:00.377) 0:07:59.550 ****** 2025-09-19 07:12:42.446783 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.446792 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.446801 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.446810 | orchestrator | 2025-09-19 07:12:42.446819 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-19 07:12:42.446828 | orchestrator | Friday 19 September 2025 07:09:26 +0000 (0:00:00.358) 0:07:59.909 ****** 2025-09-19 07:12:42.446836 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 07:12:42.446857 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-09-19 07:12:42.446866 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-09-19 07:12:42.446874 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-09-19 07:12:42.446882 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-09-19 07:12:42.446890 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-09-19 07:12:42.446898 | orchestrator | 2025-09-19 07:12:42.446906 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-19 07:12:42.446915 | orchestrator | Friday 19 September 2025 07:09:27 +0000 (0:00:01.259) 0:08:01.169 ****** 2025-09-19 07:12:42.446933 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-19 07:12:42.446941 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-19 07:12:42.446949 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-19 07:12:42.446957 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-19 07:12:42.446966 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-19 07:12:42.446974 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-19 07:12:42.446982 | orchestrator | 2025-09-19 07:12:42.446996 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-09-19 07:12:42.447005 | orchestrator | Friday 19 September 2025 07:09:30 +0000 (0:00:02.370) 0:08:03.539 ****** 2025-09-19 07:12:42.447014 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-19 07:12:42.447022 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-19 07:12:42.447030 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-19 07:12:42.447038 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-19 07:12:42.447047 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-19 07:12:42.447055 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-19 07:12:42.447063 | orchestrator | 2025-09-19 07:12:42.447072 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-19 07:12:42.447080 | orchestrator | Friday 19 September 2025 07:09:33 +0000 (0:00:03.569) 0:08:07.108 ****** 2025-09-19 07:12:42.447089 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447097 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.447106 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:12:42.447114 | orchestrator | 2025-09-19 07:12:42.447122 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-19 07:12:42.447131 | orchestrator | Friday 19 September 2025 07:09:37 +0000 (0:00:03.127) 0:08:10.236 ****** 2025-09-19 07:12:42.447139 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447147 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.447156 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-09-19 07:12:42.447164 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-19 07:12:42.447173 | orchestrator | 2025-09-19 07:12:42.447181 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-19 07:12:42.447190 | orchestrator | Friday 19 September 2025 07:09:50 +0000 (0:00:13.114) 0:08:23.350 ****** 2025-09-19 07:12:42.447198 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447206 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.447214 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.447223 | orchestrator | 2025-09-19 07:12:42.447231 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 07:12:42.447239 | orchestrator | Friday 19 September 2025 07:09:51 +0000 (0:00:00.871) 0:08:24.221 ****** 2025-09-19 07:12:42.447248 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447256 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.447264 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.447272 | orchestrator | 2025-09-19 07:12:42.447280 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-19 07:12:42.447289 | orchestrator | Friday 19 September 2025 07:09:51 +0000 (0:00:00.612) 0:08:24.834 ****** 2025-09-19 07:12:42.447297 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.447305 | orchestrator | 2025-09-19 07:12:42.447314 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-19 07:12:42.447322 | orchestrator | Friday 19 September 2025 07:09:52 +0000 (0:00:00.524) 0:08:25.358 ****** 2025-09-19 07:12:42.447330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 07:12:42.447339 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-19 07:12:42.447348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 07:12:42.447362 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447370 | orchestrator | 2025-09-19 07:12:42.447379 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-19 07:12:42.447387 | orchestrator | Friday 19 September 2025 07:09:52 +0000 (0:00:00.418) 0:08:25.777 ****** 2025-09-19 07:12:42.447395 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447403 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.447411 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.447420 | orchestrator | 2025-09-19 07:12:42.447432 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-19 07:12:42.447441 | orchestrator | Friday 19 September 2025 07:09:52 +0000 (0:00:00.348) 0:08:26.125 ****** 2025-09-19 07:12:42.447448 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447456 | orchestrator | 2025-09-19 07:12:42.447464 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-19 07:12:42.447472 | orchestrator | Friday 19 September 2025 07:09:53 +0000 (0:00:00.237) 0:08:26.363 ****** 2025-09-19 07:12:42.447480 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447489 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.447497 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.447505 | orchestrator | 2025-09-19 07:12:42.447513 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-19 07:12:42.447521 | orchestrator | Friday 19 September 2025 07:09:53 +0000 (0:00:00.585) 0:08:26.949 ****** 2025-09-19 07:12:42.447530 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447538 | orchestrator | 2025-09-19 07:12:42.447547 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-19 07:12:42.447555 | orchestrator | Friday 19 September 2025 07:09:54 +0000 (0:00:00.286) 0:08:27.235 ****** 2025-09-19 07:12:42.447564 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447572 | orchestrator | 2025-09-19 07:12:42.447581 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-19 07:12:42.447589 | orchestrator | Friday 19 September 2025 07:09:54 +0000 (0:00:00.209) 0:08:27.445 ****** 2025-09-19 07:12:42.447597 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447606 | orchestrator | 2025-09-19 07:12:42.447614 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-19 07:12:42.447622 | orchestrator | Friday 19 September 2025 07:09:54 +0000 (0:00:00.128) 0:08:27.573 ****** 2025-09-19 07:12:42.447630 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447639 | orchestrator | 2025-09-19 07:12:42.447647 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-19 07:12:42.447656 | orchestrator | Friday 19 September 2025 07:09:54 +0000 (0:00:00.224) 0:08:27.798 ****** 2025-09-19 07:12:42.447669 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447678 | orchestrator | 2025-09-19 07:12:42.447686 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-19 07:12:42.447694 | orchestrator | Friday 19 September 2025 07:09:54 +0000 (0:00:00.219) 0:08:28.017 ****** 2025-09-19 07:12:42.447702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 07:12:42.447710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 07:12:42.447719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 07:12:42.447727 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
07:12:42.447735 | orchestrator | 2025-09-19 07:12:42.447743 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-19 07:12:42.447752 | orchestrator | Friday 19 September 2025 07:09:55 +0000 (0:00:00.370) 0:08:28.387 ****** 2025-09-19 07:12:42.447760 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447768 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.447777 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.447785 | orchestrator | 2025-09-19 07:12:42.447793 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-19 07:12:42.447801 | orchestrator | Friday 19 September 2025 07:09:55 +0000 (0:00:00.307) 0:08:28.694 ****** 2025-09-19 07:12:42.447814 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447822 | orchestrator | 2025-09-19 07:12:42.447830 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-19 07:12:42.447839 | orchestrator | Friday 19 September 2025 07:09:56 +0000 (0:00:00.897) 0:08:29.592 ****** 2025-09-19 07:12:42.447881 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.447890 | orchestrator | 2025-09-19 07:12:42.447898 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-19 07:12:42.447906 | orchestrator | 2025-09-19 07:12:42.447915 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 07:12:42.447923 | orchestrator | Friday 19 September 2025 07:09:57 +0000 (0:00:00.725) 0:08:30.317 ****** 2025-09-19 07:12:42.447931 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.447940 | orchestrator | 2025-09-19 07:12:42.447948 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-19 07:12:42.447956 | orchestrator | Friday 19 September 2025 07:09:58 +0000 (0:00:01.251) 0:08:31.569 ****** 2025-09-19 07:12:42.447965 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:12:42.447973 | orchestrator | 2025-09-19 07:12:42.447982 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 07:12:42.447990 | orchestrator | Friday 19 September 2025 07:09:59 +0000 (0:00:01.342) 0:08:32.911 ****** 2025-09-19 07:12:42.447998 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.448007 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.448015 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.448023 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.448032 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.448040 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.448048 | orchestrator | 2025-09-19 07:12:42.448056 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 07:12:42.448065 | orchestrator | Friday 19 September 2025 07:10:01 +0000 (0:00:01.440) 0:08:34.352 ****** 2025-09-19 07:12:42.448073 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.448081 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.448090 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.448098 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.448106 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.448114 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.448123 | orchestrator | 2025-09-19 07:12:42.448131 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 07:12:42.448143 | orchestrator | Friday 19 
September 2025 07:10:01 +0000 (0:00:00.807) 0:08:35.160 ****** 2025-09-19 07:12:42.448151 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.448160 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.448168 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.448176 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.448185 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.448193 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.448201 | orchestrator | 2025-09-19 07:12:42.448210 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 07:12:42.448218 | orchestrator | Friday 19 September 2025 07:10:02 +0000 (0:00:00.949) 0:08:36.110 ****** 2025-09-19 07:12:42.448226 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.448234 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.448242 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.448250 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.448259 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.448267 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.448275 | orchestrator | 2025-09-19 07:12:42.448289 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 07:12:42.448297 | orchestrator | Friday 19 September 2025 07:10:03 +0000 (0:00:00.748) 0:08:36.858 ****** 2025-09-19 07:12:42.448306 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.448314 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.448322 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.448330 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.448339 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.448347 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.448355 | orchestrator | 2025-09-19 07:12:42.448363 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-09-19 07:12:42.448372 | orchestrator | Friday 19 September 2025 07:10:04 +0000 (0:00:01.027) 0:08:37.885 ****** 2025-09-19 07:12:42.448380 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.448389 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.448397 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.448405 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.448413 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.448426 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.448434 | orchestrator | 2025-09-19 07:12:42.448441 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 07:12:42.448449 | orchestrator | Friday 19 September 2025 07:10:05 +0000 (0:00:00.895) 0:08:38.781 ****** 2025-09-19 07:12:42.448456 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.448464 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.448471 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.448479 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:12:42.448487 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:12:42.448494 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:12:42.448502 | orchestrator | 2025-09-19 07:12:42.448510 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 07:12:42.448518 | orchestrator | Friday 19 September 2025 07:10:06 +0000 (0:00:00.589) 0:08:39.371 ****** 2025-09-19 07:12:42.448525 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.448533 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.448541 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.448548 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:12:42.448556 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:12:42.448564 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:12:42.448572 | 
orchestrator |
2025-09-19 07:12:42.448580 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 07:12:42.448588 | orchestrator | Friday 19 September 2025 07:10:07 +0000 (0:00:01.343) 0:08:40.715 ******
2025-09-19 07:12:42.448595 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.448603 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.448611 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.448618 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.448626 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.448634 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.448642 | orchestrator |
2025-09-19 07:12:42.448650 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 07:12:42.448658 | orchestrator | Friday 19 September 2025 07:10:08 +0000 (0:00:00.957) 0:08:41.672 ******
2025-09-19 07:12:42.448666 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.448674 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.448682 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.448690 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.448698 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.448705 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.448713 | orchestrator |
2025-09-19 07:12:42.448721 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 07:12:42.448729 | orchestrator | Friday 19 September 2025 07:10:09 +0000 (0:00:00.702) 0:08:42.375 ******
2025-09-19 07:12:42.448737 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.448749 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.448757 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.448765 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.448773 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.448780 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.448788 | orchestrator |
2025-09-19 07:12:42.448796 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 07:12:42.448804 | orchestrator | Friday 19 September 2025 07:10:09 +0000 (0:00:00.579) 0:08:42.955 ******
2025-09-19 07:12:42.448812 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.448820 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.448828 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.448836 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.448855 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.448863 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.448871 | orchestrator |
2025-09-19 07:12:42.448879 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 07:12:42.448887 | orchestrator | Friday 19 September 2025 07:10:10 +0000 (0:00:00.763) 0:08:43.719 ******
2025-09-19 07:12:42.448895 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.448902 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.448909 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.448916 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.448924 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.448932 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.448939 | orchestrator |
2025-09-19 07:12:42.448950 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 07:12:42.448958 | orchestrator | Friday 19 September 2025 07:10:11 +0000 (0:00:00.536) 0:08:44.256 ******
2025-09-19 07:12:42.448965 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.448973 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.448981 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.448985 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.448990 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.448994 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.448999 | orchestrator |
2025-09-19 07:12:42.449003 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 07:12:42.449008 | orchestrator | Friday 19 September 2025 07:10:11 +0000 (0:00:00.733) 0:08:44.989 ******
2025-09-19 07:12:42.449013 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.449017 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.449021 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.449026 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.449030 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.449035 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.449039 | orchestrator |
2025-09-19 07:12:42.449044 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 07:12:42.449048 | orchestrator | Friday 19 September 2025 07:10:12 +0000 (0:00:00.509) 0:08:45.499 ******
2025-09-19 07:12:42.449053 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.449057 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.449062 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.449066 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:12:42.449071 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:12:42.449075 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:12:42.449079 | orchestrator |
2025-09-19 07:12:42.449084 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 07:12:42.449089 | orchestrator | Friday 19 September 2025 07:10:13 +0000 (0:00:00.743) 0:08:46.242 ******
2025-09-19 07:12:42.449093 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.449098 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.449102 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.449112 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.449116 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.449125 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.449129 | orchestrator |
2025-09-19 07:12:42.449134 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 07:12:42.449138 | orchestrator | Friday 19 September 2025 07:10:13 +0000 (0:00:00.528) 0:08:46.770 ******
2025-09-19 07:12:42.449143 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.449147 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.449152 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.449156 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.449161 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.449165 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.449170 | orchestrator |
2025-09-19 07:12:42.449174 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 07:12:42.449179 | orchestrator | Friday 19 September 2025 07:10:14 +0000 (0:00:01.231) 0:08:47.562 ******
2025-09-19 07:12:42.449183 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.449188 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.449192 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.449197 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.449201 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.449206 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.449210 | orchestrator |
2025-09-19 07:12:42.449214 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-09-19 07:12:42.449219 | orchestrator | Friday 19 September 2025 07:10:15 +0000 (0:00:01.231) 0:08:48.794 ******
2025-09-19 07:12:42.449224 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:12:42.449228 | orchestrator |
2025-09-19 07:12:42.449233 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-09-19 07:12:42.449237 | orchestrator | Friday 19 September 2025 07:10:19 +0000 (0:00:04.170) 0:08:52.964 ******
2025-09-19 07:12:42.449242 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:12:42.449246 | orchestrator |
2025-09-19 07:12:42.449251 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-09-19 07:12:42.449255 | orchestrator | Friday 19 September 2025 07:10:22 +0000 (0:00:02.260) 0:08:55.225 ******
2025-09-19 07:12:42.449260 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.449264 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.449269 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.449273 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.449278 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.449282 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.449287 | orchestrator |
2025-09-19 07:12:42.449292 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-09-19 07:12:42.449296 | orchestrator | Friday 19 September 2025 07:10:23 +0000 (0:00:01.457) 0:08:56.682 ******
2025-09-19 07:12:42.449301 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.449305 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.449310 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.449318 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.449325 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.449333 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.449340 | orchestrator |
2025-09-19 07:12:42.449348 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-09-19 07:12:42.449355 | orchestrator | Friday 19 September 2025 07:10:24 +0000 (0:00:01.349) 0:08:58.032 ******
2025-09-19 07:12:42.449363 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:12:42.449371 | orchestrator |
2025-09-19 07:12:42.449379 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-09-19 07:12:42.449388 | orchestrator | Friday 19 September 2025 07:10:26 +0000 (0:00:01.397) 0:08:59.430 ******
2025-09-19 07:12:42.449396 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.449404 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.449416 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.449425 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.449429 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.449437 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.449441 | orchestrator |
2025-09-19 07:12:42.449446 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-09-19 07:12:42.449451 | orchestrator | Friday 19 September 2025 07:10:27 +0000 (0:00:01.587) 0:09:01.018 ******
2025-09-19 07:12:42.449455 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.449460 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.449464 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.449468 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.449473 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.449477 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.449482 | orchestrator |
2025-09-19 07:12:42.449487 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-09-19 07:12:42.449491 | orchestrator | Friday 19 September 2025 07:10:31 +0000 (0:00:03.745) 0:09:04.763 ******
2025-09-19 07:12:42.449496 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:12:42.449501 | orchestrator |
2025-09-19 07:12:42.449505 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-09-19 07:12:42.449510 | orchestrator | Friday 19 September 2025 07:10:33 +0000 (0:00:01.432) 0:09:06.196 ******
2025-09-19 07:12:42.449514 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.449518 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.449523 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.449527 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.449532 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.449536 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.449540 | orchestrator |
2025-09-19 07:12:42.449545 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-09-19 07:12:42.449550 | orchestrator | Friday 19 September 2025 07:10:33 +0000 (0:00:00.802) 0:09:06.999 ******
2025-09-19 07:12:42.449554 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.449559 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.449564 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:12:42.449573 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:12:42.449578 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.449583 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:12:42.449588 | orchestrator |
2025-09-19 07:12:42.449593 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-09-19 07:12:42.449599 | orchestrator | Friday 19 September 2025 07:10:36 +0000 (0:00:02.888) 0:09:09.887 ******
2025-09-19 07:12:42.449604 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.449609 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.449614 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.449619 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:12:42.449623 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:12:42.449628 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:12:42.449634 | orchestrator |
2025-09-19 07:12:42.449639 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-09-19 07:12:42.449643 | orchestrator |
2025-09-19 07:12:42.449648 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 07:12:42.449653 | orchestrator | Friday 19 September 2025 07:10:37 +0000 (0:00:00.898) 0:09:10.786 ******
2025-09-19 07:12:42.449659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.449664 | orchestrator |
2025-09-19 07:12:42.449669 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 07:12:42.449674 | orchestrator | Friday 19 September 2025 07:10:38 +0000 (0:00:00.819) 0:09:11.605 ******
2025-09-19 07:12:42.449679 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.449687 | orchestrator |
2025-09-19 07:12:42.449692 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 07:12:42.449698 | orchestrator | Friday 19 September 2025 07:10:38 +0000 (0:00:00.516) 0:09:12.122 ******
2025-09-19 07:12:42.449703 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.449708 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.449713 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.449718 | orchestrator |
2025-09-19 07:12:42.449723 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 07:12:42.449728 | orchestrator | Friday 19 September 2025 07:10:39 +0000 (0:00:00.604) 0:09:12.727 ******
2025-09-19 07:12:42.449733 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.449738 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.449744 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.449749 | orchestrator |
2025-09-19 07:12:42.449754 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 07:12:42.449759 | orchestrator | Friday 19 September 2025 07:10:40 +0000 (0:00:00.699) 0:09:13.426 ******
2025-09-19 07:12:42.449764 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.449769 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.449774 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.449779 | orchestrator |
2025-09-19 07:12:42.449785 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 07:12:42.449790 | orchestrator | Friday 19 September 2025 07:10:40 +0000 (0:00:00.728) 0:09:14.155 ******
2025-09-19 07:12:42.449795 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.449800 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.449805 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.449811 | orchestrator |
2025-09-19 07:12:42.449816 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 07:12:42.449821 | orchestrator | Friday 19 September 2025 07:10:41 +0000 (0:00:00.752) 0:09:14.907 ******
2025-09-19 07:12:42.449826 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.449831 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.449836 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.449851 | orchestrator |
2025-09-19 07:12:42.449857 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 07:12:42.449862 | orchestrator | Friday 19 September 2025 07:10:42 +0000 (0:00:00.643) 0:09:15.551 ******
2025-09-19 07:12:42.449867 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.449873 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.449880 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.449885 | orchestrator |
2025-09-19 07:12:42.449890 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 07:12:42.449895 | orchestrator | Friday 19 September 2025 07:10:42 +0000 (0:00:00.356) 0:09:15.907 ******
2025-09-19 07:12:42.449901 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.449906 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.449911 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.449917 | orchestrator |
2025-09-19 07:12:42.449921 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 07:12:42.449926 | orchestrator | Friday 19 September 2025 07:10:43 +0000 (0:00:00.357) 0:09:16.265 ******
2025-09-19 07:12:42.449930 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.449935 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.449939 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.449944 | orchestrator |
2025-09-19 07:12:42.449948 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 07:12:42.449953 | orchestrator | Friday 19 September 2025 07:10:43 +0000 (0:00:00.760) 0:09:17.025 ******
2025-09-19 07:12:42.449957 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.449962 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.449966 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.449974 | orchestrator |
2025-09-19 07:12:42.449979 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 07:12:42.449983 | orchestrator | Friday 19 September 2025 07:10:44 +0000 (0:00:01.076) 0:09:18.101 ******
2025-09-19 07:12:42.449988 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.449992 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.449997 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.450001 | orchestrator |
2025-09-19 07:12:42.450006 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 07:12:42.450010 | orchestrator | Friday 19 September 2025 07:10:45 +0000 (0:00:00.350) 0:09:18.452 ******
2025-09-19 07:12:42.450030 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.450035 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.450039 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.450044 | orchestrator |
2025-09-19 07:12:42.450052 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 07:12:42.450057 | orchestrator | Friday 19 September 2025 07:10:45 +0000 (0:00:00.351) 0:09:18.803 ******
2025-09-19 07:12:42.450062 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.450067 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.450071 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.450076 | orchestrator |
2025-09-19 07:12:42.450081 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 07:12:42.450085 | orchestrator | Friday 19 September 2025 07:10:45 +0000 (0:00:00.326) 0:09:19.130 ******
2025-09-19 07:12:42.450090 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.450095 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.450099 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.450104 | orchestrator |
2025-09-19 07:12:42.450109 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 07:12:42.450113 | orchestrator | Friday 19 September 2025 07:10:46 +0000 (0:00:00.638) 0:09:19.769 ******
2025-09-19 07:12:42.450118 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.450123 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.450127 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.450132 | orchestrator |
2025-09-19 07:12:42.450137 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 07:12:42.450142 | orchestrator | Friday 19 September 2025 07:10:46 +0000 (0:00:00.390) 0:09:20.160 ******
2025-09-19 07:12:42.450146 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.450151 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.450156 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.450161 | orchestrator |
2025-09-19 07:12:42.450165 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 07:12:42.450170 | orchestrator | Friday 19 September 2025 07:10:47 +0000 (0:00:00.482) 0:09:20.643 ******
2025-09-19 07:12:42.450175 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.450180 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.450184 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.450189 | orchestrator |
2025-09-19 07:12:42.450194 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 07:12:42.450198 | orchestrator | Friday 19 September 2025 07:10:47 +0000 (0:00:00.381) 0:09:21.024 ******
2025-09-19 07:12:42.450203 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.450208 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.450212 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.450217 | orchestrator |
2025-09-19 07:12:42.450222 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 07:12:42.450226 | orchestrator | Friday 19 September 2025 07:10:48 +0000 (0:00:00.749) 0:09:21.774 ******
2025-09-19 07:12:42.450231 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.450236 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.450240 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.450245 | orchestrator |
2025-09-19 07:12:42.450250 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 07:12:42.450259 | orchestrator | Friday 19 September 2025 07:10:48 +0000 (0:00:00.402) 0:09:22.176 ******
2025-09-19 07:12:42.450263 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.450268 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.450273 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.450277 | orchestrator |
2025-09-19 07:12:42.450282 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-09-19 07:12:42.450287 | orchestrator | Friday 19 September 2025 07:10:49 +0000 (0:00:00.611) 0:09:22.788 ******
2025-09-19 07:12:42.450291 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.450296 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.450301 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-09-19 07:12:42.450306 | orchestrator |
2025-09-19 07:12:42.450310 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-09-19 07:12:42.450315 | orchestrator | Friday 19 September 2025 07:10:50 +0000 (0:00:00.743) 0:09:23.532 ******
2025-09-19 07:12:42.450320 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:12:42.450325 | orchestrator |
2025-09-19 07:12:42.450333 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-09-19 07:12:42.450338 | orchestrator | Friday 19 September 2025 07:10:52 +0000 (0:00:02.231) 0:09:25.764 ******
2025-09-19 07:12:42.450344 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-09-19 07:12:42.450350 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.450354 | orchestrator |
2025-09-19 07:12:42.450359 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-09-19 07:12:42.450364 | orchestrator | Friday 19 September 2025 07:10:52 +0000 (0:00:00.250) 0:09:26.015 ******
2025-09-19 07:12:42.450370 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 07:12:42.450379 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 07:12:42.450383 | orchestrator |
2025-09-19 07:12:42.450388 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-09-19 07:12:42.450393 | orchestrator | Friday 19 September 2025 07:11:01 +0000 (0:00:08.392) 0:09:34.407 ******
2025-09-19 07:12:42.450398 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:12:42.450403 | orchestrator |
2025-09-19 07:12:42.450410 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-09-19 07:12:42.450415 | orchestrator | Friday 19 September 2025 07:11:04 +0000 (0:00:03.735) 0:09:38.143 ******
2025-09-19 07:12:42.450420 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.450424 | orchestrator |
2025-09-19 07:12:42.450429 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-09-19 07:12:42.450434 | orchestrator | Friday 19 September 2025 07:11:06 +0000 (0:00:01.110) 0:09:39.253 ******
2025-09-19 07:12:42.450438 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-19 07:12:42.450443 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-19 07:12:42.450448 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-19 07:12:42.450452 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-09-19 07:12:42.450457 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-09-19 07:12:42.450465 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-09-19 07:12:42.450470 | orchestrator |
2025-09-19 07:12:42.450475 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-09-19 07:12:42.450479 | orchestrator | Friday 19 September 2025 07:11:07 +0000 (0:00:01.285) 0:09:40.539 ******
2025-09-19 07:12:42.450484 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 07:12:42.450489 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 07:12:42.450493 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-19 07:12:42.450498 | orchestrator |
2025-09-19 07:12:42.450503 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-09-19 07:12:42.450508 | orchestrator | Friday 19 September 2025 07:11:09 +0000 (0:00:02.282) 0:09:42.822 ******
2025-09-19 07:12:42.450512 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 07:12:42.450517 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 07:12:42.450522 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.450527 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 07:12:42.450531 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-19 07:12:42.450536 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.450541 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 07:12:42.450545 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-19 07:12:42.450550 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.450555 | orchestrator |
2025-09-19 07:12:42.450559 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-09-19 07:12:42.450564 | orchestrator | Friday 19 September 2025 07:11:10 +0000 (0:00:01.257) 0:09:44.079 ******
2025-09-19 07:12:42.450569 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.450573 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.450578 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.450583 | orchestrator |
2025-09-19 07:12:42.450588 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-09-19 07:12:42.450592 | orchestrator | Friday 19 September 2025 07:11:13 +0000 (0:00:02.632) 0:09:46.711 ******
2025-09-19 07:12:42.450597 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.450602 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.450606 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.450611 | orchestrator |
2025-09-19 07:12:42.450616 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-09-19 07:12:42.450621 | orchestrator | Friday 19 September 2025 07:11:14 +0000 (0:00:00.591) 0:09:47.303 ******
2025-09-19 07:12:42.450625 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.450630 | orchestrator |
2025-09-19 07:12:42.450637 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-09-19 07:12:42.450641 | orchestrator | Friday 19 September 2025 07:11:14 +0000 (0:00:00.558) 0:09:47.861 ******
2025-09-19 07:12:42.450646 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.450651 | orchestrator |
2025-09-19 07:12:42.450655 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-09-19 07:12:42.450660 | orchestrator | Friday 19 September 2025 07:11:15 +0000 (0:00:00.769) 0:09:48.630 ******
2025-09-19 07:12:42.450665 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.450670 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.450674 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.450679 | orchestrator |
2025-09-19 07:12:42.450684 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-09-19 07:12:42.450688 | orchestrator | Friday 19 September 2025 07:11:16 +0000 (0:00:01.290) 0:09:49.921 ******
2025-09-19 07:12:42.450693 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.450700 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.450705 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.450710 | orchestrator |
2025-09-19 07:12:42.450715 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-09-19 07:12:42.450719 | orchestrator | Friday 19 September 2025 07:11:17 +0000 (0:00:01.202) 0:09:51.123 ******
2025-09-19 07:12:42.450724 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.450729 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.450734 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.450738 | orchestrator |
2025-09-19 07:12:42.450743 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-09-19 07:12:42.450748 | orchestrator | Friday 19 September 2025 07:11:19 +0000 (0:00:01.644) 0:09:52.768 ******
2025-09-19 07:12:42.450753 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.450757 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.450762 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.450767 | orchestrator |
2025-09-19 07:12:42.450774 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-09-19 07:12:42.450779 | orchestrator | Friday 19 September 2025 07:11:21 +0000 (0:00:02.223) 0:09:54.992 ******
2025-09-19 07:12:42.450784 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.450788 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.450793 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.450798 | orchestrator |
2025-09-19 07:12:42.450803 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 07:12:42.450807 | orchestrator | Friday 19 September 2025 07:11:23 +0000 (0:00:01.330) 0:09:56.322 ******
2025-09-19 07:12:42.450812 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.450817 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.450822 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.450826 | orchestrator |
2025-09-19 07:12:42.450831 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-19 07:12:42.450836 | orchestrator | Friday 19 September 2025 07:11:24 +0000 (0:00:00.956) 0:09:57.278 ******
2025-09-19 07:12:42.450840 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.450869 | orchestrator |
2025-09-19 07:12:42.450875 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-19 07:12:42.450879 | orchestrator | Friday 19 September 2025 07:11:24 +0000 (0:00:00.518) 0:09:57.796 ******
2025-09-19 07:12:42.450884 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.450889 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.450894 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.450899 | orchestrator |
2025-09-19 07:12:42.450903 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-19 07:12:42.450908 | orchestrator | Friday 19 September 2025 07:11:24 +0000 (0:00:00.312) 0:09:58.109 ******
2025-09-19 07:12:42.450913 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:12:42.450918 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:12:42.450923 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:12:42.450927 | orchestrator |
2025-09-19 07:12:42.450932 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-19 07:12:42.450937 | orchestrator | Friday 19 September 2025 07:11:26 +0000 (0:00:01.497) 0:09:59.606 ******
2025-09-19 07:12:42.450942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 07:12:42.450947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 07:12:42.450951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 07:12:42.450956 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.450961 | orchestrator |
2025-09-19 07:12:42.450966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-19 07:12:42.450971 | orchestrator | Friday 19 September 2025 07:11:27 +0000 (0:00:00.627) 0:10:00.233 ******
2025-09-19 07:12:42.450975 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.450984 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:12:42.450989 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:12:42.450994 | orchestrator |
2025-09-19 07:12:42.450998 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-19 07:12:42.451003 | orchestrator |
2025-09-19 07:12:42.451008 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 07:12:42.451013 | orchestrator | Friday 19 September 2025 07:11:27 +0000 (0:00:00.547) 0:10:00.781 ******
2025-09-19 07:12:42.451018 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.451023 | orchestrator |
2025-09-19 07:12:42.451027 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 07:12:42.451032 | orchestrator | Friday 19 September 2025 07:11:28 +0000 (0:00:00.761) 0:10:01.543 ******
2025-09-19 07:12:42.451037 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:12:42.451042 | orchestrator |
2025-09-19 07:12:42.451049 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 07:12:42.451054 | orchestrator | Friday 19 September 2025 07:11:28 +0000 (0:00:00.518) 0:10:02.061 ******
2025-09-19 07:12:42.451058 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:12:42.451063 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:12:42.451068 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:12:42.451073 | orchestrator |
2025-09-19 07:12:42.451077 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 07:12:42.451082 | orchestrator | Friday 19 September 2025 07:11:29 +0000 (0:00:00.539) 0:10:02.600 ******
2025-09-19 07:12:42.451087 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:12:42.451092 | orchestrator | ok: [testbed-node-4]
2025-09-19
07:12:42.451096 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451101 | orchestrator | 2025-09-19 07:12:42.451106 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 07:12:42.451111 | orchestrator | Friday 19 September 2025 07:11:30 +0000 (0:00:00.699) 0:10:03.300 ****** 2025-09-19 07:12:42.451115 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.451120 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.451125 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451130 | orchestrator | 2025-09-19 07:12:42.451135 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 07:12:42.451139 | orchestrator | Friday 19 September 2025 07:11:30 +0000 (0:00:00.751) 0:10:04.051 ****** 2025-09-19 07:12:42.451143 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.451148 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.451152 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451156 | orchestrator | 2025-09-19 07:12:42.451161 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 07:12:42.451165 | orchestrator | Friday 19 September 2025 07:11:31 +0000 (0:00:00.721) 0:10:04.773 ****** 2025-09-19 07:12:42.451170 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451174 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.451178 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.451183 | orchestrator | 2025-09-19 07:12:42.451187 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 07:12:42.451194 | orchestrator | Friday 19 September 2025 07:11:32 +0000 (0:00:00.615) 0:10:05.388 ****** 2025-09-19 07:12:42.451199 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451203 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.451207 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 07:12:42.451212 | orchestrator | 2025-09-19 07:12:42.451216 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 07:12:42.451220 | orchestrator | Friday 19 September 2025 07:11:32 +0000 (0:00:00.365) 0:10:05.754 ****** 2025-09-19 07:12:42.451224 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451229 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.451236 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.451240 | orchestrator | 2025-09-19 07:12:42.451244 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 07:12:42.451248 | orchestrator | Friday 19 September 2025 07:11:32 +0000 (0:00:00.311) 0:10:06.065 ****** 2025-09-19 07:12:42.451253 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.451257 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.451261 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451266 | orchestrator | 2025-09-19 07:12:42.451270 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 07:12:42.451274 | orchestrator | Friday 19 September 2025 07:11:33 +0000 (0:00:00.753) 0:10:06.819 ****** 2025-09-19 07:12:42.451279 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.451283 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.451287 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451291 | orchestrator | 2025-09-19 07:12:42.451296 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 07:12:42.451300 | orchestrator | Friday 19 September 2025 07:11:34 +0000 (0:00:01.021) 0:10:07.840 ****** 2025-09-19 07:12:42.451304 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451309 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.451313 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
07:12:42.451317 | orchestrator | 2025-09-19 07:12:42.451322 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 07:12:42.451326 | orchestrator | Friday 19 September 2025 07:11:35 +0000 (0:00:00.377) 0:10:08.218 ****** 2025-09-19 07:12:42.451330 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451335 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.451339 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.451343 | orchestrator | 2025-09-19 07:12:42.451347 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 07:12:42.451352 | orchestrator | Friday 19 September 2025 07:11:35 +0000 (0:00:00.363) 0:10:08.581 ****** 2025-09-19 07:12:42.451356 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.451360 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.451365 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451369 | orchestrator | 2025-09-19 07:12:42.451373 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 07:12:42.451378 | orchestrator | Friday 19 September 2025 07:11:35 +0000 (0:00:00.407) 0:10:08.989 ****** 2025-09-19 07:12:42.451382 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.451386 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.451390 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451395 | orchestrator | 2025-09-19 07:12:42.451399 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 07:12:42.451403 | orchestrator | Friday 19 September 2025 07:11:36 +0000 (0:00:00.592) 0:10:09.582 ****** 2025-09-19 07:12:42.451408 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.451412 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.451416 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451420 | orchestrator | 2025-09-19 
07:12:42.451425 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 07:12:42.451429 | orchestrator | Friday 19 September 2025 07:11:36 +0000 (0:00:00.376) 0:10:09.958 ****** 2025-09-19 07:12:42.451434 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451438 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.451442 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.451447 | orchestrator | 2025-09-19 07:12:42.451451 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 07:12:42.451455 | orchestrator | Friday 19 September 2025 07:11:37 +0000 (0:00:00.317) 0:10:10.276 ****** 2025-09-19 07:12:42.451462 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451466 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.451470 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.451475 | orchestrator | 2025-09-19 07:12:42.451481 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 07:12:42.451486 | orchestrator | Friday 19 September 2025 07:11:37 +0000 (0:00:00.328) 0:10:10.604 ****** 2025-09-19 07:12:42.451490 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451494 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.451499 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.451503 | orchestrator | 2025-09-19 07:12:42.451507 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 07:12:42.451511 | orchestrator | Friday 19 September 2025 07:11:37 +0000 (0:00:00.578) 0:10:11.183 ****** 2025-09-19 07:12:42.451516 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.451520 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.451524 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451528 | orchestrator | 2025-09-19 07:12:42.451533 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 07:12:42.451537 | orchestrator | Friday 19 September 2025 07:11:38 +0000 (0:00:00.335) 0:10:11.519 ****** 2025-09-19 07:12:42.451541 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.451546 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.451550 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.451554 | orchestrator | 2025-09-19 07:12:42.451559 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-19 07:12:42.451563 | orchestrator | Friday 19 September 2025 07:11:38 +0000 (0:00:00.550) 0:10:12.070 ****** 2025-09-19 07:12:42.451567 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.451572 | orchestrator | 2025-09-19 07:12:42.451576 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 07:12:42.451580 | orchestrator | Friday 19 September 2025 07:11:39 +0000 (0:00:00.826) 0:10:12.896 ****** 2025-09-19 07:12:42.451587 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.451592 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 07:12:42.451596 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:12:42.451600 | orchestrator | 2025-09-19 07:12:42.451605 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 07:12:42.451609 | orchestrator | Friday 19 September 2025 07:11:41 +0000 (0:00:02.159) 0:10:15.055 ****** 2025-09-19 07:12:42.451613 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 07:12:42.451618 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 07:12:42.451622 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:12:42.451626 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-09-19 07:12:42.451630 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 07:12:42.451635 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:12:42.451639 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 07:12:42.451643 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 07:12:42.451648 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:12:42.451652 | orchestrator | 2025-09-19 07:12:42.451656 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-19 07:12:42.451661 | orchestrator | Friday 19 September 2025 07:11:43 +0000 (0:00:01.255) 0:10:16.311 ****** 2025-09-19 07:12:42.451665 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451669 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.451674 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.451678 | orchestrator | 2025-09-19 07:12:42.451682 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-19 07:12:42.451687 | orchestrator | Friday 19 September 2025 07:11:43 +0000 (0:00:00.348) 0:10:16.660 ****** 2025-09-19 07:12:42.451691 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.451695 | orchestrator | 2025-09-19 07:12:42.451699 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-19 07:12:42.451713 | orchestrator | Friday 19 September 2025 07:11:44 +0000 (0:00:00.958) 0:10:17.618 ****** 2025-09-19 07:12:42.451718 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.451722 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.451727 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.451731 | orchestrator | 2025-09-19 07:12:42.451735 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-19 07:12:42.451740 | orchestrator | Friday 19 September 2025 07:11:45 +0000 (0:00:00.870) 0:10:18.488 ****** 2025-09-19 07:12:42.451744 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.451748 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.451753 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 07:12:42.451757 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 07:12:42.451762 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.451768 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 07:12:42.451772 | orchestrator | 2025-09-19 07:12:42.451776 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 07:12:42.451781 | orchestrator | Friday 19 September 2025 07:11:49 +0000 (0:00:04.628) 0:10:23.116 ****** 2025-09-19 07:12:42.451785 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.451789 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:12:42.451793 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.451798 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:12:42.451802 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:12:42.451806 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:12:42.451811 | orchestrator | 2025-09-19 07:12:42.451815 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 07:12:42.451819 | orchestrator | Friday 19 September 2025 07:11:53 +0000 (0:00:03.241) 0:10:26.358 ****** 2025-09-19 07:12:42.451824 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 07:12:42.451828 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:12:42.451832 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 07:12:42.451837 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:12:42.451841 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 07:12:42.451852 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:12:42.451856 | orchestrator | 2025-09-19 07:12:42.451860 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-19 07:12:42.451864 | orchestrator | Friday 19 September 2025 07:11:54 +0000 (0:00:01.246) 0:10:27.604 ****** 2025-09-19 07:12:42.451872 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-19 07:12:42.451876 | orchestrator | 2025-09-19 07:12:42.451880 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-19 07:12:42.451884 | orchestrator | Friday 19 September 2025 07:11:54 +0000 (0:00:00.287) 0:10:27.892 ****** 2025-09-19 07:12:42.451889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-09-19 07:12:42.451896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:12:42.451900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:12:42.451904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:12:42.451908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:12:42.451913 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.451917 | orchestrator | 2025-09-19 07:12:42.451921 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-19 07:12:42.451925 | orchestrator | Friday 19 September 2025 07:11:55 +0000 (0:00:00.608) 0:10:28.501 ****** 2025-09-19 07:12:42.451929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:12:42.451934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:12:42.451938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:12:42.451942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:12:42.451946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 07:12:42.451951 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
07:12:42.451955 | orchestrator | 2025-09-19 07:12:42.451959 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-19 07:12:42.451963 | orchestrator | Friday 19 September 2025 07:11:55 +0000 (0:00:00.671) 0:10:29.173 ****** 2025-09-19 07:12:42.451968 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:12:42.451972 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:12:42.451976 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:12:42.451981 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:12:42.451987 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 07:12:42.451991 | orchestrator | 2025-09-19 07:12:42.451995 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-19 07:12:42.452000 | orchestrator | Friday 19 September 2025 07:12:27 +0000 (0:00:31.256) 0:11:00.429 ****** 2025-09-19 07:12:42.452004 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.452008 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.452012 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.452016 | orchestrator | 2025-09-19 07:12:42.452021 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-19 07:12:42.452025 | orchestrator | 
Friday 19 September 2025 07:12:27 +0000 (0:00:00.309) 0:11:00.738 ****** 2025-09-19 07:12:42.452029 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.452034 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.452040 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.452045 | orchestrator | 2025-09-19 07:12:42.452049 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-19 07:12:42.452053 | orchestrator | Friday 19 September 2025 07:12:28 +0000 (0:00:00.567) 0:11:01.306 ****** 2025-09-19 07:12:42.452057 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.452062 | orchestrator | 2025-09-19 07:12:42.452066 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-19 07:12:42.452070 | orchestrator | Friday 19 September 2025 07:12:28 +0000 (0:00:00.562) 0:11:01.868 ****** 2025-09-19 07:12:42.452074 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.452079 | orchestrator | 2025-09-19 07:12:42.452083 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-19 07:12:42.452087 | orchestrator | Friday 19 September 2025 07:12:29 +0000 (0:00:00.763) 0:11:02.632 ****** 2025-09-19 07:12:42.452094 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:12:42.452099 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:12:42.452103 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:12:42.452107 | orchestrator | 2025-09-19 07:12:42.452112 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-19 07:12:42.452116 | orchestrator | Friday 19 September 2025 07:12:30 +0000 (0:00:01.255) 0:11:03.887 ****** 2025-09-19 07:12:42.452120 | orchestrator | changed: 
[testbed-node-3] 2025-09-19 07:12:42.452125 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:12:42.452129 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:12:42.452133 | orchestrator | 2025-09-19 07:12:42.452137 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-19 07:12:42.452142 | orchestrator | Friday 19 September 2025 07:12:31 +0000 (0:00:01.205) 0:11:05.093 ****** 2025-09-19 07:12:42.452146 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:12:42.452150 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:12:42.452154 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:12:42.452159 | orchestrator | 2025-09-19 07:12:42.452163 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-19 07:12:42.452167 | orchestrator | Friday 19 September 2025 07:12:33 +0000 (0:00:01.681) 0:11:06.774 ****** 2025-09-19 07:12:42.452172 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.452176 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.452180 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 07:12:42.452185 | orchestrator | 2025-09-19 07:12:42.452189 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 07:12:42.452193 | orchestrator | Friday 19 September 2025 07:12:36 +0000 (0:00:02.695) 0:11:09.469 ****** 2025-09-19 07:12:42.452198 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.452202 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.452206 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.452211 | orchestrator 
| 2025-09-19 07:12:42.452215 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-19 07:12:42.452219 | orchestrator | Friday 19 September 2025 07:12:36 +0000 (0:00:00.347) 0:11:09.817 ****** 2025-09-19 07:12:42.452223 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:12:42.452228 | orchestrator | 2025-09-19 07:12:42.452232 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-19 07:12:42.452236 | orchestrator | Friday 19 September 2025 07:12:37 +0000 (0:00:00.837) 0:11:10.654 ****** 2025-09-19 07:12:42.452241 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.452248 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.452253 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.452257 | orchestrator | 2025-09-19 07:12:42.452261 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-19 07:12:42.452266 | orchestrator | Friday 19 September 2025 07:12:37 +0000 (0:00:00.325) 0:11:10.980 ****** 2025-09-19 07:12:42.452270 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:12:42.452274 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:12:42.452278 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:12:42.452283 | orchestrator | 2025-09-19 07:12:42.452287 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-19 07:12:42.452291 | orchestrator | Friday 19 September 2025 07:12:38 +0000 (0:00:00.340) 0:11:11.321 ****** 2025-09-19 07:12:42.452296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 07:12:42.452300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 07:12:42.452304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 07:12:42.452311 | orchestrator 
| skipping: [testbed-node-3] 2025-09-19 07:12:42.452316 | orchestrator | 2025-09-19 07:12:42.452320 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-19 07:12:42.452325 | orchestrator | Friday 19 September 2025 07:12:39 +0000 (0:00:01.158) 0:11:12.479 ****** 2025-09-19 07:12:42.452329 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:12:42.452333 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:12:42.452337 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:12:42.452342 | orchestrator | 2025-09-19 07:12:42.452346 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:12:42.452350 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-19 07:12:42.452355 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-19 07:12:42.452359 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-19 07:12:42.452364 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-19 07:12:42.452368 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-19 07:12:42.452372 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-19 07:12:42.452377 | orchestrator | 2025-09-19 07:12:42.452381 | orchestrator | 2025-09-19 07:12:42.452385 | orchestrator | 2025-09-19 07:12:42.452392 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:12:42.452397 | orchestrator | Friday 19 September 2025 07:12:39 +0000 (0:00:00.256) 0:11:12.735 ****** 2025-09-19 07:12:42.452401 | orchestrator | =============================================================================== 
2025-09-19 07:12:42.452405 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.21s 2025-09-19 07:12:42.452409 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.74s 2025-09-19 07:12:42.452413 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.26s 2025-09-19 07:12:42.452418 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 31.19s 2025-09-19 07:12:42.452422 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.81s 2025-09-19 07:12:42.452426 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.97s 2025-09-19 07:12:42.452430 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.11s 2025-09-19 07:12:42.452437 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.65s 2025-09-19 07:12:42.452441 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.28s 2025-09-19 07:12:42.452446 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.39s 2025-09-19 07:12:42.452450 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.66s 2025-09-19 07:12:42.452454 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.31s 2025-09-19 07:12:42.452458 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.03s 2025-09-19 07:12:42.452463 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.63s 2025-09-19 07:12:42.452467 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.17s 2025-09-19 07:12:42.452471 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.81s 2025-09-19 
07:12:42.452475 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.75s 2025-09-19 07:12:42.452479 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.74s 2025-09-19 07:12:42.452483 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.57s 2025-09-19 07:12:42.452488 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.55s 2025-09-19 07:12:42.452492 | orchestrator | 2025-09-19 07:12:42 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:42.452496 | orchestrator | 2025-09-19 07:12:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:12:45.476799 | orchestrator | 2025-09-19 07:12:45 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED 2025-09-19 07:12:45.478317 | orchestrator | 2025-09-19 07:12:45 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state STARTED 2025-09-19 07:12:45.480076 | orchestrator | 2025-09-19 07:12:45 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED 2025-09-19 07:12:45.480108 | orchestrator | 2025-09-19 07:12:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:31.283500 | orchestrator | 2025-09-19 07:13:31 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED 2025-09-19 07:13:31.286512 | orchestrator | 2025-09-19 07:13:31 | INFO  | Task 6fcebc57-3540-4d8d-8040-fc391d012339 is in state SUCCESS 2025-09-19 07:13:31.287655 | orchestrator | 2025-09-19 07:13:31.287905 | orchestrator | 2025-09-19 07:13:31.287926 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:13:31.287939 | orchestrator | 2025-09-19 07:13:31.287950 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:13:31.287962 | orchestrator | Friday 19 September 2025 07:10:28 +0000 (0:00:00.267) 0:00:00.267 ****** 2025-09-19 07:13:31.287973 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:31.287986 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:13:31.287996 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:13:31.288028 | orchestrator | 2025-09-19 07:13:31.288041 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:13:31.288052 | orchestrator | Friday 19 September 2025 07:10:28 +0000 (0:00:00.332) 0:00:00.599 ****** 2025-09-19 07:13:31.288063 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-19 07:13:31.288075 | orchestrator | ok: [testbed-node-1] =>
(item=enable_opensearch_True) 2025-09-19 07:13:31.288085 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-19 07:13:31.288096 | orchestrator | 2025-09-19 07:13:31.288107 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-19 07:13:31.288118 | orchestrator | 2025-09-19 07:13:31.288129 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 07:13:31.288139 | orchestrator | Friday 19 September 2025 07:10:29 +0000 (0:00:00.467) 0:00:01.067 ****** 2025-09-19 07:13:31.288150 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:13:31.288221 | orchestrator | 2025-09-19 07:13:31.288237 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-19 07:13:31.288274 | orchestrator | Friday 19 September 2025 07:10:29 +0000 (0:00:00.530) 0:00:01.598 ****** 2025-09-19 07:13:31.288286 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 07:13:31.288296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 07:13:31.288307 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 07:13:31.288318 | orchestrator | 2025-09-19 07:13:31.288329 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-19 07:13:31.288340 | orchestrator | Friday 19 September 2025 07:10:30 +0000 (0:00:00.710) 0:00:02.308 ****** 2025-09-19 07:13:31.288369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.288386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.288409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.288424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:13:31.288452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:13:31.288552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:13:31.288566 | orchestrator | 2025-09-19 07:13:31.288577 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 07:13:31.288589 | orchestrator | Friday 19 September 2025 07:10:32 +0000 (0:00:01.701) 0:00:04.010 ****** 2025-09-19 07:13:31.288599 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-09-19 07:13:31.288610 | orchestrator | 2025-09-19 07:13:31.288621 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-19 07:13:31.288632 | orchestrator | Friday 19 September 2025 07:10:32 +0000 (0:00:00.543) 0:00:04.554 ****** 2025-09-19 07:13:31.288655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.288668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2025-09-19 07:13:31.288693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.288706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 
07:13:31.288726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:13:31.288739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:13:31.288757 | orchestrator | 2025-09-19 07:13:31.288768 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-19 07:13:31.288779 | orchestrator | Friday 19 September 2025 07:10:35 +0000 (0:00:03.121) 0:00:07.675 ****** 2025-09-19 07:13:31.288796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:13:31.288834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:13:31.288847 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:31.288859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:13:31.288880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:13:31.288899 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:31.288916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:13:31.288928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:13:31.288940 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:31.288950 | orchestrator | 2025-09-19 07:13:31.288962 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-19 07:13:31.288973 | orchestrator | Friday 19 September 2025 07:10:37 +0000 (0:00:01.267) 0:00:08.943 ****** 2025-09-19 07:13:31.288984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:13:31.289010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:13:31.289023 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:31.289039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:13:31.289052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:13:31.289063 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:31.289075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 07:13:31.289102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 07:13:31.289114 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:31.289125 | orchestrator | 2025-09-19 07:13:31.289136 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-19 07:13:31.289147 | orchestrator | Friday 19 September 2025 07:10:38 +0000 (0:00:01.256) 0:00:10.199 ****** 2025-09-19 07:13:31.289158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.289175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.289189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.289210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:13:31.289237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-09-19 07:13:31.289257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:13:31.289270 | orchestrator | 2025-09-19 07:13:31.289283 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-19 07:13:31.289295 | orchestrator | Friday 19 September 2025 07:10:41 +0000 (0:00:02.760) 0:00:12.960 ****** 2025-09-19 07:13:31.289308 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:31.289320 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:13:31.289331 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:13:31.289342 | orchestrator | 2025-09-19 07:13:31.289353 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-19 07:13:31.289363 | orchestrator | Friday 19 September 2025 07:10:44 +0000 (0:00:03.486) 0:00:16.447 ****** 2025-09-19 07:13:31.289374 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:31.289384 | orchestrator 
| changed: [testbed-node-1] 2025-09-19 07:13:31.289395 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:13:31.289406 | orchestrator | 2025-09-19 07:13:31.289416 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-19 07:13:31.289435 | orchestrator | Friday 19 September 2025 07:10:46 +0000 (0:00:02.070) 0:00:18.518 ****** 2025-09-19 07:13:31.289447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.289466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.289478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 07:13:31.289495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:13:31.289508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 07:13:31.289533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-19 07:13:31.289546 | orchestrator |
2025-09-19 07:13:31.289557 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-19 07:13:31.289568 | orchestrator | Friday 19 September 2025 07:10:49 +0000 (0:00:02.400) 0:00:20.919 ******
2025-09-19 07:13:31.289579 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:13:31.289590 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:13:31.289601 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:13:31.289611 | orchestrator |
2025-09-19 07:13:31.289622 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-19 07:13:31.289632 | orchestrator | Friday 19 September 2025 07:10:49 +0000 (0:00:00.433) 0:00:21.353 ******
2025-09-19 07:13:31.289643 | orchestrator |
2025-09-19 07:13:31.289654 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-19 07:13:31.289665 | orchestrator | Friday 19 September 2025 07:10:49 +0000 (0:00:00.064) 0:00:21.417 ******
2025-09-19 07:13:31.289675 | orchestrator |
2025-09-19 07:13:31.289686 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-19 07:13:31.289697 | orchestrator | Friday 19 September 2025 07:10:49 +0000 (0:00:00.067) 0:00:21.479 ******
2025-09-19 07:13:31.289708 | orchestrator |
2025-09-19 07:13:31.289718 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-09-19 07:13:31.289729 | orchestrator | Friday 19 September 2025 07:10:49 +0000 (0:00:00.067) 0:00:21.546 ******
2025-09-19 07:13:31.289739 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:13:31.289750 | orchestrator |
2025-09-19 07:13:31.289776 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-09-19 07:13:31.289798 | orchestrator | Friday 19 September 2025 07:10:50 +0000 (0:00:00.209) 0:00:21.756 ******
2025-09-19 07:13:31.289830 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:13:31.289841 | orchestrator |
2025-09-19 07:13:31.289852 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-09-19 07:13:31.289863 | orchestrator | Friday 19 September 2025 07:10:50 +0000 (0:00:00.606) 0:00:22.362 ******
2025-09-19 07:13:31.289874 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:13:31.289884 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:13:31.289902 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:13:31.289913 | orchestrator |
2025-09-19 07:13:31.289924 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-09-19 07:13:31.289934 | orchestrator | Friday 19 September 2025 07:11:53 +0000 (0:01:02.652) 0:01:25.015 ******
2025-09-19 07:13:31.289945 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:13:31.289956 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:13:31.289967 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:13:31.289978 | orchestrator |
2025-09-19 07:13:31.289989 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-19 07:13:31.290000 | orchestrator | Friday 19 September 2025 07:13:17 +0000 (0:01:24.249) 0:02:49.265 ******
2025-09-19 07:13:31.290010 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:13:31.290071 | orchestrator |
2025-09-19 07:13:31.290083 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-09-19 07:13:31.290094 | orchestrator | Friday 19 September 2025 07:13:18 +0000 (0:00:00.580) 0:02:49.845 ******
2025-09-19 07:13:31.290104 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:13:31.290115 | orchestrator |
2025-09-19 07:13:31.290126 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-09-19 07:13:31.290137 | orchestrator | Friday 19 September 2025 07:13:21 +0000 (0:00:03.066) 0:02:52.911 ******
2025-09-19 07:13:31.290148 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:13:31.290159 | orchestrator |
2025-09-19 07:13:31.290169 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-09-19 07:13:31.290180 | orchestrator | Friday 19 September 2025 07:13:23 +0000 (0:00:02.389) 0:02:55.301 ******
2025-09-19 07:13:31.290191 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:13:31.290201 | orchestrator |
2025-09-19 07:13:31.290212 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-09-19 07:13:31.290223 | orchestrator | Friday 19 September 2025 07:13:26 +0000 (0:00:02.946) 0:02:58.248 ******
2025-09-19 07:13:31.290234 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:13:31.290244 | orchestrator |
2025-09-19 07:13:31.290255 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:13:31.290267 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 07:13:31.290280 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 07:13:31.290291 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 07:13:31.290302 | orchestrator |
2025-09-19 07:13:31.290312 | orchestrator |
2025-09-19 07:13:31.290323 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:13:31.290341 | orchestrator | Friday 19 September 2025 07:13:29 +0000 (0:00:02.609) 0:03:00.858 ******
2025-09-19 07:13:31.290352 | orchestrator | ===============================================================================
2025-09-19 07:13:31.290363 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 84.25s
2025-09-19 07:13:31.290374 | orchestrator | opensearch : Restart opensearch container ------------------------------ 62.65s
2025-09-19 07:13:31.290384 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.49s
2025-09-19 07:13:31.290395 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.12s
2025-09-19 07:13:31.290406 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.07s
2025-09-19 07:13:31.290417 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.95s
2025-09-19 07:13:31.290427 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.76s
2025-09-19 07:13:31.290438 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.61s
2025-09-19 07:13:31.290455 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.40s
2025-09-19 07:13:31.290466 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.39s
2025-09-19 07:13:31.290477 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.07s
2025-09-19 07:13:31.290488 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.70s
2025-09-19 07:13:31.290498 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.27s
2025-09-19 07:13:31.290509 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.26s
2025-09-19 07:13:31.290520 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.71s
2025-09-19 07:13:31.290530 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.61s
2025-09-19 07:13:31.290541 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s
2025-09-19 07:13:31.290552 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2025-09-19 07:13:31.290562 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-09-19 07:13:31.290573 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2025-09-19 07:13:31.290589 | orchestrator | 2025-09-19 07:13:31 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED
2025-09-19 07:13:31.290600 | orchestrator | 2025-09-19 07:13:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:13:34.328481 | orchestrator | 2025-09-19 07:13:34 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED
2025-09-19 07:13:34.329617 | orchestrator | 2025-09-19 07:13:34 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED
2025-09-19 07:13:34.330244 | orchestrator | 2025-09-19 07:13:34 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:13:37.369140 | orchestrator | 2025-09-19 07:13:37 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED
2025-09-19 07:13:37.371181 | orchestrator | 2025-09-19 07:13:37 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED
2025-09-19 07:13:37.371277 | orchestrator | 2025-09-19 07:13:37 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:13:40.412231 | orchestrator | 2025-09-19 07:13:40 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED
2025-09-19 07:13:40.413644 | orchestrator | 2025-09-19 07:13:40 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED
2025-09-19 07:13:40.413754 | orchestrator | 2025-09-19 07:13:40 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:13:43.462918 | orchestrator | 2025-09-19 07:13:43 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED
2025-09-19 07:13:43.465109 | orchestrator | 2025-09-19 07:13:43 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED
2025-09-19 07:13:43.465699 | orchestrator | 2025-09-19 07:13:43 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:13:46.502740 | orchestrator | 2025-09-19 07:13:46 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED
2025-09-19 07:13:46.504661 | orchestrator | 2025-09-19 07:13:46 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state STARTED
2025-09-19 07:13:46.504706 | orchestrator | 2025-09-19 07:13:46 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:13:49.553694 | orchestrator | 2025-09-19 07:13:49 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED
2025-09-19 07:13:49.558410 | orchestrator | 2025-09-19 07:13:49 | INFO  | Task 0fb49335-9c29-4480-930a-029d5243e06d is in state SUCCESS
2025-09-19 07:13:49.560114 | orchestrator |
2025-09-19 07:13:49.560153 | orchestrator |
2025-09-19 07:13:49.560166 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-09-19 07:13:49.560178 | orchestrator |
2025-09-19 07:13:49.560190 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-19 07:13:49.560201 | orchestrator | Friday 19 September 2025 07:10:28 +0000 (0:00:00.153) 0:00:00.153 ******
2025-09-19 07:13:49.560212 | orchestrator | ok: [localhost] => {
2025-09-19 07:13:49.560224 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-09-19 07:13:49.560235 | orchestrator | }
2025-09-19 07:13:49.560246 | orchestrator |
2025-09-19 07:13:49.560257 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-09-19 07:13:49.560268 | orchestrator | Friday 19 September 2025 07:10:28 +0000 (0:00:00.046) 0:00:00.200 ******
2025-09-19 07:13:49.560279 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-09-19 07:13:49.560291 | orchestrator | ...ignoring
2025-09-19 07:13:49.560303 | orchestrator |
2025-09-19 07:13:49.560556 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-09-19 07:13:49.560575 | orchestrator | Friday 19 September 2025 07:10:31 +0000 (0:00:02.852) 0:00:03.053 ******
2025-09-19 07:13:49.560586 | orchestrator | skipping: [localhost]
2025-09-19 07:13:49.560597 | orchestrator |
2025-09-19 07:13:49.560608 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-09-19 07:13:49.560619 | orchestrator | Friday 19 September 2025 07:10:31 +0000 (0:00:00.062) 0:00:03.115 ******
2025-09-19 07:13:49.560630 | orchestrator | ok: [localhost]
2025-09-19 07:13:49.560641 | orchestrator |
2025-09-19 07:13:49.560653 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:13:49.560664 | orchestrator |
2025-09-19 07:13:49.560674 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:13:49.560685 | orchestrator | Friday 19 September 2025 07:10:31 +0000 (0:00:00.159) 0:00:03.275 ******
2025-09-19 07:13:49.560696 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:13:49.560707 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:13:49.560718 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:13:49.560728 | orchestrator |
2025-09-19 07:13:49.560739 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:13:49.560750 | orchestrator | Friday 19 September 2025 07:10:32 +0000 (0:00:00.331) 0:00:03.606 ******
2025-09-19 07:13:49.560760 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-09-19 07:13:49.560772 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-09-19 07:13:49.560782 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-09-19 07:13:49.560819 | orchestrator |
2025-09-19 07:13:49.560830 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-09-19 07:13:49.560841 | orchestrator |
2025-09-19 07:13:49.560852 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-09-19 07:13:49.560880 | orchestrator | Friday 19 September 2025 07:10:32 +0000 (0:00:00.605) 0:00:04.211 ******
2025-09-19 07:13:49.560891 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 07:13:49.560903 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 07:13:49.560913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 07:13:49.560924 | orchestrator |
2025-09-19 07:13:49.560935 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-19 07:13:49.560946 | orchestrator | Friday 19 September 2025 07:10:33 +0000 (0:00:00.702) 0:00:04.670 ******
2025-09-19 07:13:49.560957 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:13:49.560969 | orchestrator |
2025-09-19 07:13:49.560980 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-09-19 07:13:49.560990 | orchestrator | Friday 19 September 2025 07:10:33 +0000 (0:00:00.702) 0:00:05.373 ******
2025-09-19 07:13:49.561036 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:13:49.561059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:13:49.561073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:13:49.561092 | orchestrator | 2025-09-19 07:13:49.561112 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-19 07:13:49.561124 | orchestrator | Friday 19 September 2025 07:10:37 +0000 (0:00:03.737) 0:00:09.110 ****** 2025-09-19 07:13:49.561135 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.561147 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.561158 | 
orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.561168 | orchestrator | 2025-09-19 07:13:49.561179 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-19 07:13:49.561191 | orchestrator | Friday 19 September 2025 07:10:38 +0000 (0:00:00.840) 0:00:09.951 ****** 2025-09-19 07:13:49.561203 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.561215 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.561227 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.561239 | orchestrator | 2025-09-19 07:13:49.561252 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-19 07:13:49.561264 | orchestrator | Friday 19 September 2025 07:10:40 +0000 (0:00:01.786) 0:00:11.738 ****** 2025-09-19 07:13:49.561283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:13:49.561313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:13:49.561333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:13:49.561356 | orchestrator | 2025-09-19 07:13:49.561369 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-19 07:13:49.561381 | orchestrator | Friday 19 September 2025 07:10:44 +0000 (0:00:04.517) 0:00:16.255 ****** 2025-09-19 07:13:49.561394 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.561406 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.561419 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.561431 | orchestrator | 2025-09-19 07:13:49.561443 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-19 07:13:49.561455 | orchestrator | Friday 19 September 2025 07:10:45 +0000 (0:00:01.207) 0:00:17.463 ****** 2025-09-19 07:13:49.561467 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:13:49.561479 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.561491 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:13:49.561503 | orchestrator | 2025-09-19 07:13:49.561516 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 07:13:49.561528 | orchestrator | Friday 19 September 2025 07:10:50 +0000 (0:00:04.778) 0:00:22.241 ****** 2025-09-19 07:13:49.561540 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:13:49.561552 | orchestrator | 2025-09-19 07:13:49.561563 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19 
07:13:49.561574 | orchestrator | Friday 19 September 2025 07:10:51 +0000 (0:00:00.594) 0:00:22.836 ****** 2025-09-19 07:13:49.561594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:13:49.561606 | orchestrator | 
skipping: [testbed-node-1] 2025-09-19 07:13:49.561623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:13:49.561644 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.561663 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:13:49.561676 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.561687 | orchestrator | 2025-09-19 07:13:49.561697 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2025-09-19 07:13:49.561708 | orchestrator | Friday 19 September 2025 07:10:54 +0000 (0:00:03.020) 0:00:25.857 ****** 2025-09-19 07:13:49.561724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-09-19 07:13:49.561743 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.561760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:13:49.561772 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 07:13:49.561784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:13:49.561867 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.561878 | orchestrator | 2025-09-19 
07:13:49.561889 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-19 07:13:49.561900 | orchestrator | Friday 19 September 2025 07:10:58 +0000 (0:00:04.114) 0:00:29.972 ****** 2025-09-19 07:13:49.561912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:13:49.561924 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.561944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-09-19 07:13:49.561963 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.561979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 07:13:49.561991 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 07:13:49.562002 | orchestrator | 2025-09-19 07:13:49.562063 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-19 07:13:49.562078 | orchestrator | Friday 19 September 2025 07:11:02 +0000 (0:00:03.911) 0:00:33.883 ****** 2025-09-19 07:13:49.562100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:13:49.562127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-09-19 07:13:49.562148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 07:13:49.562168 | orchestrator | 2025-09-19 07:13:49.562179 | orchestrator | TASK [mariadb : Create MariaDB volume] 
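The container definitions above wire the Docker healthcheck to `/usr/bin/clustercheck` and pass `AVAILABLE_WHEN_DONOR=1` in the environment. The decision that script makes can be sketched as follows (an assumption based on the classic Galera clustercheck behaviour; the real script shells out to mysql and speaks HTTP, which is omitted here):

```python
def clustercheck(wsrep_local_state: int, available_when_donor: bool = True) -> int:
    """Map a Galera wsrep_local_state to the HTTP-style status a healthcheck returns.

    State 4 (Synced) is healthy; state 2 (Donor/Desynced) counts as healthy
    only when AVAILABLE_WHEN_DONOR=1, matching the environment passed to the
    mariadb container above.
    """
    SYNCED, DONOR_DESYNCED = 4, 2
    if wsrep_local_state == SYNCED:
        return 200
    if available_when_donor and wsrep_local_state == DONOR_DESYNCED:
        return 200  # donating a state snapshot but still allowed to serve
    return 503      # unhealthy: the container is marked as failing
```

This is why only one HAProxy member per shard is active while the others carry the `backup` flag in the rendered member lists above: writes are funnelled to a single Galera node at a time.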
***************************************** 2025-09-19 07:13:49.562190 | orchestrator | Friday 19 September 2025 07:11:05 +0000 (0:00:03.351) 0:00:37.234 ****** 2025-09-19 07:13:49.562201 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.562212 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:13:49.562222 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:13:49.562233 | orchestrator | 2025-09-19 07:13:49.562244 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-19 07:13:49.562254 | orchestrator | Friday 19 September 2025 07:11:06 +0000 (0:00:00.858) 0:00:38.093 ****** 2025-09-19 07:13:49.562265 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.562276 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:13:49.562287 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:13:49.562297 | orchestrator | 2025-09-19 07:13:49.562308 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-19 07:13:49.562319 | orchestrator | Friday 19 September 2025 07:11:07 +0000 (0:00:00.554) 0:00:38.647 ****** 2025-09-19 07:13:49.562330 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.562340 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:13:49.562351 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:13:49.562362 | orchestrator | 2025-09-19 07:13:49.562373 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-19 07:13:49.562389 | orchestrator | Friday 19 September 2025 07:11:07 +0000 (0:00:00.373) 0:00:39.021 ****** 2025-09-19 07:13:49.562402 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-19 07:13:49.562413 | orchestrator | ...ignoring 2025-09-19 07:13:49.562424 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-19 07:13:49.562435 | orchestrator | ...ignoring 2025-09-19 07:13:49.562446 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-19 07:13:49.562457 | orchestrator | ...ignoring 2025-09-19 07:13:49.562467 | orchestrator | 2025-09-19 07:13:49.562478 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-19 07:13:49.562489 | orchestrator | Friday 19 September 2025 07:11:18 +0000 (0:00:10.899) 0:00:49.920 ****** 2025-09-19 07:13:49.562500 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.562510 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:13:49.562521 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:13:49.562531 | orchestrator | 2025-09-19 07:13:49.562542 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-19 07:13:49.562553 | orchestrator | Friday 19 September 2025 07:11:18 +0000 (0:00:00.397) 0:00:50.317 ****** 2025-09-19 07:13:49.562563 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.562574 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.562585 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.562596 | orchestrator | 2025-09-19 07:13:49.562606 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-19 07:13:49.562617 | orchestrator | Friday 19 September 2025 07:11:19 +0000 (0:00:00.653) 0:00:50.971 ****** 2025-09-19 07:13:49.562628 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.562638 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.562649 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.562666 | orchestrator | 2025-09-19 07:13:49.562676 | orchestrator | TASK 
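The "Check MariaDB service port liveness" failures above are expected on a first deploy: nothing listens on 3306 yet, so the 10-second wait for the string `MariaDB` in the server greeting times out and the failure is ignored. A minimal stand-in for that probe (the real task presumably uses Ansible's `wait_for` with `search_regex`; this sketch is an assumption):

```python
import re
import socket
import time


def wait_for_banner(host: str, port: int, pattern: str,
                    timeout: float = 10.0, connect_timeout: float = 1.0) -> bool:
    """Poll host:port until the server greeting matches `pattern`.

    Returns False on timeout instead of raising, mirroring how the role
    tolerates the check failing on hosts where the service is not up yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=connect_timeout) as conn:
                conn.settimeout(connect_timeout)
                greeting = conn.recv(4096)
                if re.search(pattern.encode(), greeting):
                    return True
        except OSError:
            pass  # connection refused / reset: server not listening yet
        time.sleep(0.2)
    return False
```

The result of this probe is what the following "Divide hosts by their MariaDB service port liveness" task groups on.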
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-19 07:13:49.562687 | orchestrator | Friday 19 September 2025 07:11:19 +0000 (0:00:00.447) 0:00:51.418 ****** 2025-09-19 07:13:49.562698 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.562708 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.562719 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.562730 | orchestrator | 2025-09-19 07:13:49.562740 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-19 07:13:49.562751 | orchestrator | Friday 19 September 2025 07:11:20 +0000 (0:00:00.443) 0:00:51.862 ****** 2025-09-19 07:13:49.562761 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.562772 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:13:49.562782 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:13:49.562811 | orchestrator | 2025-09-19 07:13:49.562822 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-19 07:13:49.562833 | orchestrator | Friday 19 September 2025 07:11:20 +0000 (0:00:00.407) 0:00:52.269 ****** 2025-09-19 07:13:49.562850 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.562862 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.562873 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.562883 | orchestrator | 2025-09-19 07:13:49.562894 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 07:13:49.562905 | orchestrator | Friday 19 September 2025 07:11:21 +0000 (0:00:00.877) 0:00:53.147 ****** 2025-09-19 07:13:49.562915 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.562926 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.562937 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-19 07:13:49.562947 | orchestrator | 2025-09-19 
07:13:49.562958 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-19 07:13:49.562969 | orchestrator | Friday 19 September 2025 07:11:22 +0000 (0:00:00.402) 0:00:53.550 ****** 2025-09-19 07:13:49.562980 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.562990 | orchestrator | 2025-09-19 07:13:49.563001 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-19 07:13:49.563011 | orchestrator | Friday 19 September 2025 07:11:32 +0000 (0:00:10.318) 0:01:03.868 ****** 2025-09-19 07:13:49.563022 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.563033 | orchestrator | 2025-09-19 07:13:49.563043 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 07:13:49.563054 | orchestrator | Friday 19 September 2025 07:11:32 +0000 (0:00:00.127) 0:01:03.995 ****** 2025-09-19 07:13:49.563064 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.563075 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.563086 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.563096 | orchestrator | 2025-09-19 07:13:49.563107 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-19 07:13:49.563118 | orchestrator | Friday 19 September 2025 07:11:33 +0000 (0:00:01.134) 0:01:05.129 ****** 2025-09-19 07:13:49.563128 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.563139 | orchestrator | 2025-09-19 07:13:49.563150 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-19 07:13:49.563160 | orchestrator | Friday 19 September 2025 07:11:41 +0000 (0:00:08.209) 0:01:13.339 ****** 2025-09-19 07:13:49.563171 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.563181 | orchestrator | 2025-09-19 07:13:49.563194 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
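The bootstrap step above picks one node (testbed-node-0) to form a new Galera cluster. In Galera terms that amounts to starting the first node with an empty `gcomm://` cluster address (the effect of `--wsrep-new-cluster`), while the remaining nodes join by listing the existing members; the exact mechanism inside the kolla bootstrap container is not visible in this log, so treat this as a sketch:

```python
def wsrep_cluster_address(bootstrap: bool, members: list) -> str:
    """Return the Galera cluster address for a node.

    An empty gcomm:// URL tells the node to form a brand-new cluster;
    joiners list the existing members so they can request a state transfer.
    """
    if bootstrap:
        return "gcomm://"
    return "gcomm://" + ",".join(members)
```

Once the bootstrap node is up, the handlers start the remaining nodes one at a time, which is why testbed-node-1 and testbed-node-2 are restarted in separate plays below.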
service to sync WSREP] ******* 2025-09-19 07:13:49.563212 | orchestrator | Friday 19 September 2025 07:11:43 +0000 (0:00:01.677) 0:01:15.017 ****** 2025-09-19 07:13:49.563229 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.563248 | orchestrator | 2025-09-19 07:13:49.563264 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-19 07:13:49.563283 | orchestrator | Friday 19 September 2025 07:11:46 +0000 (0:00:02.946) 0:01:17.964 ****** 2025-09-19 07:13:49.563315 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.563333 | orchestrator | 2025-09-19 07:13:49.563351 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-19 07:13:49.563369 | orchestrator | Friday 19 September 2025 07:11:46 +0000 (0:00:00.138) 0:01:18.102 ****** 2025-09-19 07:13:49.563380 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.563391 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.563402 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.563412 | orchestrator | 2025-09-19 07:13:49.563430 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-19 07:13:49.563447 | orchestrator | Friday 19 September 2025 07:11:46 +0000 (0:00:00.334) 0:01:18.437 ****** 2025-09-19 07:13:49.563464 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.563480 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-19 07:13:49.563496 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:13:49.563511 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:13:49.563527 | orchestrator | 2025-09-19 07:13:49.563546 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-19 07:13:49.563565 | orchestrator | skipping: no hosts matched 2025-09-19 07:13:49.563584 | orchestrator | 2025-09-19 07:13:49.563596 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 07:13:49.563607 | orchestrator | 2025-09-19 07:13:49.563617 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 07:13:49.563628 | orchestrator | Friday 19 September 2025 07:11:47 +0000 (0:00:00.603) 0:01:19.040 ****** 2025-09-19 07:13:49.563639 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:13:49.563650 | orchestrator | 2025-09-19 07:13:49.563660 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 07:13:49.563671 | orchestrator | Friday 19 September 2025 07:12:07 +0000 (0:00:20.351) 0:01:39.391 ****** 2025-09-19 07:13:49.563682 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:13:49.563693 | orchestrator | 2025-09-19 07:13:49.563703 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 07:13:49.563714 | orchestrator | Friday 19 September 2025 07:12:28 +0000 (0:00:20.581) 0:01:59.973 ****** 2025-09-19 07:13:49.563725 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:13:49.563736 | orchestrator | 2025-09-19 07:13:49.563746 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 07:13:49.563757 | orchestrator | 2025-09-19 07:13:49.563768 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 07:13:49.563778 | orchestrator | Friday 19 September 2025 07:12:30 +0000 (0:00:02.398) 0:02:02.372 ****** 2025-09-19 07:13:49.563809 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:13:49.563821 | orchestrator | 2025-09-19 07:13:49.563832 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 07:13:49.563842 | orchestrator | Friday 19 September 2025 07:12:50 +0000 (0:00:19.843) 0:02:22.215 ****** 2025-09-19 07:13:49.563853 | 
orchestrator | ok: [testbed-node-2] 2025-09-19 07:13:49.563864 | orchestrator | 2025-09-19 07:13:49.563874 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 07:13:49.563885 | orchestrator | Friday 19 September 2025 07:13:11 +0000 (0:00:20.685) 0:02:42.901 ****** 2025-09-19 07:13:49.563896 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:13:49.563906 | orchestrator | 2025-09-19 07:13:49.563917 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-19 07:13:49.563927 | orchestrator | 2025-09-19 07:13:49.563946 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 07:13:49.563957 | orchestrator | Friday 19 September 2025 07:13:14 +0000 (0:00:02.657) 0:02:45.559 ****** 2025-09-19 07:13:49.563968 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.563979 | orchestrator | 2025-09-19 07:13:49.563990 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 07:13:49.564001 | orchestrator | Friday 19 September 2025 07:13:26 +0000 (0:00:12.231) 0:02:57.790 ****** 2025-09-19 07:13:49.564020 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.564030 | orchestrator | 2025-09-19 07:13:49.564041 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 07:13:49.564052 | orchestrator | Friday 19 September 2025 07:13:31 +0000 (0:00:05.600) 0:03:03.391 ****** 2025-09-19 07:13:49.564063 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.564074 | orchestrator | 2025-09-19 07:13:49.564085 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-19 07:13:49.564095 | orchestrator | 2025-09-19 07:13:49.564106 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-19 07:13:49.564117 | orchestrator | 
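Each "Wait for MariaDB service to sync WSREP" task above blocks until the node reports itself a fully synced cluster member. The check can be modelled on MariaDB's Galera status variables (`wsrep_ready` and `wsrep_local_state_comment` are real variable names; treating exactly these two as the sync criterion is an assumption about what the role inspects):

```python
def wsrep_synced(status_rows) -> bool:
    """Given (Variable_name, Value) pairs, e.g. from
    SHOW GLOBAL STATUS LIKE 'wsrep_%', report whether the node is usable."""
    status = dict(status_rows)
    return (status.get("wsrep_ready") == "ON"
            and status.get("wsrep_local_state_comment") == "Synced")
```

A node in the `Donor/Desynced` or `Joined` state would keep the wait task looping, which accounts for the ~20-second restart times of the joining nodes above.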
Friday 19 September 2025 07:13:34 +0000 (0:00:02.756) 0:03:06.148 ****** 2025-09-19 07:13:49.564127 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:13:49.564138 | orchestrator | 2025-09-19 07:13:49.564149 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-19 07:13:49.564159 | orchestrator | Friday 19 September 2025 07:13:35 +0000 (0:00:00.545) 0:03:06.694 ****** 2025-09-19 07:13:49.564170 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.564181 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.564192 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.564202 | orchestrator | 2025-09-19 07:13:49.564213 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-19 07:13:49.564224 | orchestrator | Friday 19 September 2025 07:13:37 +0000 (0:00:02.355) 0:03:09.049 ****** 2025-09-19 07:13:49.564235 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.564245 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.564256 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.564267 | orchestrator | 2025-09-19 07:13:49.564277 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-19 07:13:49.564288 | orchestrator | Friday 19 September 2025 07:13:39 +0000 (0:00:02.361) 0:03:11.411 ****** 2025-09-19 07:13:49.564299 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.564310 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.564321 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.564331 | orchestrator | 2025-09-19 07:13:49.564342 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-19 07:13:49.564352 | orchestrator | Friday 19 September 2025 07:13:42 +0000 (0:00:02.238) 0:03:13.650 ****** 2025-09-19 07:13:49.564363 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.564374 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.564392 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:13:49.564403 | orchestrator | 2025-09-19 07:13:49.564414 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-19 07:13:49.564425 | orchestrator | Friday 19 September 2025 07:13:44 +0000 (0:00:02.313) 0:03:15.964 ****** 2025-09-19 07:13:49.564435 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:13:49.564446 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:13:49.564457 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:13:49.564468 | orchestrator | 2025-09-19 07:13:49.564479 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-19 07:13:49.564489 | orchestrator | Friday 19 September 2025 07:13:47 +0000 (0:00:03.153) 0:03:19.117 ****** 2025-09-19 07:13:49.564500 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:13:49.564511 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:13:49.564521 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:13:49.564532 | orchestrator | 2025-09-19 07:13:49.564543 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:13:49.564554 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-19 07:13:49.564565 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-19 07:13:49.564584 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-19 07:13:49.564595 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-19 07:13:49.564606 | orchestrator | 2025-09-19 07:13:49.564625 | orchestrator | 2025-09-19 07:13:49.564643 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-19 07:13:49.564663 | orchestrator | Friday 19 September 2025 07:13:48 +0000 (0:00:00.548) 0:03:19.666 ****** 2025-09-19 07:13:49.564710 | orchestrator | =============================================================================== 2025-09-19 07:13:49.564729 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.27s 2025-09-19 07:13:49.564748 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 40.19s 2025-09-19 07:13:49.564769 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.23s 2025-09-19 07:13:49.564782 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s 2025-09-19 07:13:49.564823 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.32s 2025-09-19 07:13:49.564843 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.21s 2025-09-19 07:13:49.564870 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.60s 2025-09-19 07:13:49.564883 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.06s 2025-09-19 07:13:49.564896 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.78s 2025-09-19 07:13:49.564914 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.52s 2025-09-19 07:13:49.564933 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.11s 2025-09-19 07:13:49.564950 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.91s 2025-09-19 07:13:49.564969 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.74s 2025-09-19 07:13:49.564986 | orchestrator | mariadb : Check 
mariadb containers -------------------------------------- 3.35s 2025-09-19 07:13:49.565002 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.15s 2025-09-19 07:13:49.565021 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.02s 2025-09-19 07:13:49.565039 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.95s 2025-09-19 07:13:49.565057 | orchestrator | Check MariaDB service --------------------------------------------------- 2.85s 2025-09-19 07:13:49.565074 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.76s 2025-09-19 07:13:49.565093 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.36s 2025-09-19 07:13:49.565112 | orchestrator | 2025-09-19 07:13:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:52.617652 | orchestrator | 2025-09-19 07:13:52 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:13:52.619178 | orchestrator | 2025-09-19 07:13:52 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED 2025-09-19 07:13:52.621395 | orchestrator | 2025-09-19 07:13:52 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:13:52.621485 | orchestrator | 2025-09-19 07:13:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:55.654283 | orchestrator | 2025-09-19 07:13:55 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:13:55.654913 | orchestrator | 2025-09-19 07:13:55 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state STARTED 2025-09-19 07:13:55.656026 | orchestrator | 2025-09-19 07:13:55 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:13:55.656087 | orchestrator | 2025-09-19 07:13:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:13:58.713257 | 
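The interleaved `INFO | Task … is in state STARTED` lines come from the deploy wrapper polling each manager task until it leaves the STARTED state. The loop below is a generic reimplementation of that pattern; `get_state` stands in for the real task API, which this log only shows from the outside:

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=None):
    """Poll task states until every task reaches a terminal state.

    Returns the final {task_id: state} mapping; raises TimeoutError if a
    deadline is given and some task never finishes.
    """
    start = time.monotonic()
    states = {tid: "PENDING" for tid in task_ids}
    while True:
        for tid, state in states.items():
            if state not in TERMINAL_STATES:
                states[tid] = get_state(tid)
        if all(s in TERMINAL_STATES for s in states.values()):
            return states
        if timeout is not None and time.monotonic() - start > timeout:
            raise TimeoutError(f"still pending: {states}")
        time.sleep(interval)
```

In the log, task 718ca397… is the first of the three to flip to SUCCESS, at which point its buffered play output (the ceph pools play) is flushed to the console.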
97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:14:56.547951 | orchestrator | 2025-09-19 07:14:56 | INFO  | Task 718ca397-3caa-4f0d-8a90-3984f0df7a2b is in state SUCCESS
2025-09-19 07:14:56.550145 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-09-19 07:14:56.550160 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-19 07:14:56.550169 | orchestrator | Friday 19 September 2025 07:12:43 +0000 (0:00:00.568) 0:00:00.568 ******
2025-09-19 07:14:56.550176 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:14:56.550192 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-19 07:14:56.550199 | orchestrator | Friday 19 September 2025 07:12:44 +0000 (0:00:00.677) 0:00:01.245 ******
2025-09-19 07:14:56.550239 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:14:56.550248 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.550276 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:14:56.550290 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-19 07:14:56.550297 | orchestrator | Friday 19 September 2025 07:12:45 +0000 (0:00:00.754) 0:00:02.000 ******
2025-09-19 07:14:56.550304 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.550388 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:14:56.550398 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:14:56.550412 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-19 07:14:56.550418 | orchestrator | Friday 19 September 2025 07:12:45 +0000 (0:00:00.294) 0:00:02.295 ******
2025-09-19 07:14:56.550425 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.550432 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:14:56.550439 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:14:56.550452 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-19 07:14:56.550459 | orchestrator | Friday 19 September 2025 07:12:46 +0000 (0:00:00.783) 0:00:03.078 ******
2025-09-19 07:14:56.550465 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.550472 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:14:56.550479 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:14:56.550492 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-19 07:14:56.550499 | orchestrator | Friday 19 September 2025 07:12:46 +0000 (0:00:00.325) 0:00:03.404 ******
2025-09-19 07:14:56.550505 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.550512 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:14:56.550519 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:14:56.550532 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-19 07:14:56.550539 | orchestrator | Friday 19 September 2025 07:12:47 +0000 (0:00:00.301) 0:00:03.706 ******
2025-09-19 07:14:56.550545 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.550552 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:14:56.550559 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:14:56.550572 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-19 07:14:56.550579 | orchestrator | Friday 19 September 2025 07:12:47 +0000 (0:00:00.315) 0:00:04.021 ******
2025-09-19 07:14:56.550585 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.550593 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.550826 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.550883 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-19 07:14:56.550890 | orchestrator | Friday 19 September 2025 07:12:47 +0000 (0:00:00.472) 0:00:04.494 ******
2025-09-19 07:14:56.550897 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.550904 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:14:56.550910 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:14:56.550924 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-19 07:14:56.550931 | orchestrator | Friday 19 September 2025 07:12:48 +0000 (0:00:00.298) 0:00:04.792 ******
2025-09-19 07:14:56.550938 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 07:14:56.550945 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 07:14:56.550951 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 07:14:56.550965 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-19 07:14:56.550971 | orchestrator | Friday 19 September 2025 07:12:48 +0000 (0:00:00.673) 0:00:05.465 ******
2025-09-19 07:14:56.550978 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.550985 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:14:56.551005 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:14:56.551019 | orchestrator | TASK [ceph-facts : Find a running mon container]
******************************* 2025-09-19 07:14:56.551047 | orchestrator | Friday 19 September 2025 07:12:49 +0000 (0:00:00.438) 0:00:05.904 ******
2025-09-19 07:14:56.551066 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 07:14:56.551073 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 07:14:56.551080 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 07:14:56.551093 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-19 07:14:56.551144 | orchestrator | Friday 19 September 2025 07:12:51 +0000 (0:00:02.131) 0:00:08.036 ******
2025-09-19 07:14:56.551192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 07:14:56.551201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 07:14:56.551208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 07:14:56.551214 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551364 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-19 07:14:56.551393 | orchestrator | Friday 19 September 2025 07:12:51 +0000 (0:00:00.395) 0:00:08.432 ******
2025-09-19 07:14:56.551404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0) [loop-item details elided; false_condition: 'not containerized_deployment | bool']
2025-09-19 07:14:56.551413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1) [loop-item details elided]
2025-09-19 07:14:56.551420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2) [loop-item details elided]
2025-09-19 07:14:56.551427 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551440 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-19 07:14:56.551447 | orchestrator | Friday 19 September 2025 07:12:52 +0000 (0:00:00.808) 0:00:09.241 ******
2025-09-19 07:14:56.551456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0) [nested loop-item details elided; false_condition: 'not containerized_deployment | bool']
2025-09-19 07:14:56.551465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1) [nested loop-item details elided]
2025-09-19 07:14:56.551472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2) [nested loop-item details elided]
2025-09-19 07:14:56.551479 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551498 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-19 07:14:56.551505 | orchestrator | Friday 19 September 2025 07:12:52 +0000 (0:00:00.161) 0:00:09.402 ******
2025-09-19 07:14:56.551513 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) [docker ps -q --filter name=ceph-mon-testbed-node-0 -> ab71ffb15b1a; full command result elided]
2025-09-19 07:14:56.551534 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) [docker ps -q --filter name=ceph-mon-testbed-node-1 -> ddd544b3d33c; full command result elided]
2025-09-19 07:14:56.551562 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) [docker ps -q --filter name=ceph-mon-testbed-node-2 -> 32bb4622cf2f; full command result elided]
2025-09-19 07:14:56.551577 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-19 07:14:56.551584 | orchestrator | Friday 19 September 2025 07:12:53 +0000 (0:00:00.373) 0:00:09.776 ******
2025-09-19 07:14:56.551591 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.551597 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:14:56.551604 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:14:56.551617 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-19 07:14:56.551624 | orchestrator | Friday 19 September 2025 07:12:53 +0000 (0:00:00.438) 0:00:10.215 ******
2025-09-19 07:14:56.551631 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-09-19 07:14:56.551644 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-19 07:14:56.551651 | orchestrator | Friday 19 September 2025 07:12:55 +0000 (0:00:01.725) 0:00:11.941 ******
2025-09-19 07:14:56.551657 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551664 |
orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.551671 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.551684 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-19 07:14:56.551691 | orchestrator | Friday 19 September 2025 07:12:55 +0000 (0:00:00.405) 0:00:12.245 ******
2025-09-19 07:14:56.551697 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551708 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.551715 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.551729 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 07:14:56.551735 | orchestrator | Friday 19 September 2025 07:12:56 +0000 (0:00:00.503) 0:00:12.651 ******
2025-09-19 07:14:56.551757 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551764 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.551770 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.551784 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-19 07:14:56.551791 | orchestrator | Friday 19 September 2025 07:12:56 +0000 (0:00:00.134) 0:00:13.154 ******
2025-09-19 07:14:56.551797 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:14:56.551810 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-19 07:14:56.551817 | orchestrator | Friday 19 September 2025 07:12:56 +0000 (0:00:00.247) 0:00:13.289 ******
2025-09-19 07:14:56.551824 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551837 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 07:14:56.551844 | orchestrator | Friday 19 September 2025 07:12:56 +0000 (0:00:00.247) 0:00:13.537 ******
2025-09-19 07:14:56.551850 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551857 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.551864 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.551877 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-19 07:14:56.551884 | orchestrator | Friday 19 September 2025 07:12:57 +0000 (0:00:00.307) 0:00:13.844 ******
2025-09-19 07:14:56.551911 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551919 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.551926 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.551939 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-19 07:14:56.551945 | orchestrator | Friday 19 September 2025 07:12:57 +0000 (0:00:00.303) 0:00:14.148 ******
2025-09-19 07:14:56.551952 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.551959 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.551965 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.551978 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-19 07:14:56.551989 | orchestrator | Friday 19 September 2025 07:12:58 +0000 (0:00:00.523) 0:00:14.671 ******
2025-09-19 07:14:56.551996 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.552004 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.552012 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.552028 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-19 07:14:56.552035 | orchestrator | Friday 19 September 2025 07:12:58 +0000 (0:00:00.324) 0:00:14.996 ******
2025-09-19 07:14:56.552043 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.552051 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.552059 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.552073 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-19 07:14:56.552080 | orchestrator | Friday 19 September 2025 07:12:58 +0000 (0:00:00.316) 0:00:15.312 ******
2025-09-19 07:14:56.552086 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.552093 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.552100 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.552113 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-19 07:14:56.552147 | orchestrator | Friday 19 September 2025 07:12:59 +0000 (0:00:00.353) 0:00:15.666 ******
2025-09-19 07:14:56.552155 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:14:56.552162 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:14:56.552169 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:14:56.552182 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-19 07:14:56.552189 | orchestrator | Friday 19 September 2025 07:12:59 +0000 (0:00:00.534) 0:00:16.200 ******
[... per-device loop-item facts elided. skipping: [testbed-node-3] for items dm-0 and dm-1 (20.00 GB Ceph OSD LVs), loop0-loop7, sda (80.00 GB QEMU HARDDISK root disk with partitions sda1 'cloudimg-rootfs', sda14, sda15 'UEFI', sda16 'BOOT'), sdb and sdc (20.00 GB QEMU HARDDISK OSD devices holding dm-0/dm-1), sdd (20.00 GB, unused) and sr0 (QEMU DVD-ROM, label 'config-2'); skipping: [testbed-node-4] for items dm-0 and dm-1 (20.00 GB Ceph OSD LVs) and loop0-loop6 ...]
 2025-09-19 07:14:56.552453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552460 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.552475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 
'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--05a06e17--0162--5722--bf4c--f18a4cab61c7-osd--block--05a06e17--0162--5722--bf4c--f18a4cab61c7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-701MLa-Hvmr-zjJn-Mf5W-YNYL-f2gr-hokHBV', 'scsi-0QEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7', 'scsi-SQEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--caff573e--485a--5d29--90dc--90eefd21fd68-osd--block--caff573e--485a--5d29--90dc--90eefd21fd68'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4VIc2w-sS1q-hgag-jCCK-DQqD-F8UU-9JzOKT', 'scsi-0QEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee', 'scsi-SQEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0', 'scsi-SQEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552518 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.552535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--d4db71fd--07e0--550b--b185--dcfd36a5307b-osd--block--d4db71fd--07e0--550b--b185--dcfd36a5307b', 'dm-uuid-LVM-2VuNSydCF6xDPFVEK9I5XoXP18hgexhbgpWMv9SItC0xRgBVntLtQHtX3v4TsWZz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0c5dfb3--0a46--5f65--b869--b08108365918-osd--block--a0c5dfb3--0a46--5f65--b869--b08108365918', 'dm-uuid-LVM-kfNH1NnBEHxPOM95MFR51kfI2q2qlQpVOr4q5w24oDuwPe24et2QFHL52n2BudGm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552596 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 07:14:56.552623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552631 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d4db71fd--07e0--550b--b185--dcfd36a5307b-osd--block--d4db71fd--07e0--550b--b185--dcfd36a5307b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e9kad0-VhmD-NuZg-oCqb-z4kM-k78m-t9RP2d', 'scsi-0QEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400', 'scsi-SQEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a0c5dfb3--0a46--5f65--b869--b08108365918-osd--block--a0c5dfb3--0a46--5f65--b869--b08108365918'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-usR8JB-9mkA-PaYY-xtc5-7Wti-Y4N5-0AfeXV', 'scsi-0QEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c', 'scsi-SQEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3', 'scsi-SQEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 07:14:56.552671 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:14:56.552677 | orchestrator | 2025-09-19 07:14:56.552684 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-19 07:14:56.552691 | orchestrator | Friday 19 September 2025 07:13:00 +0000 (0:00:00.569) 0:00:16.769 ****** 2025-09-19 07:14:56.552698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb73447--54c2--58c6--89f8--2e63b50c59b2-osd--block--deb73447--54c2--58c6--89f8--2e63b50c59b2', 'dm-uuid-LVM-XvI1wpi0mlo2hzhwRoH4K1fschEbbhdh2e5elcYuufXf341NnOftrw9hvbPcwhQa'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1-osd--block--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1', 'dm-uuid-LVM-Su90vW0BEUeQSGmwjTSwOn77M0vIvaha3sCWB7PEjm1YojP1KMlkNMNjvR6S7zpe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552714 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552792 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552802 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--05a06e17--0162--5722--bf4c--f18a4cab61c7-osd--block--05a06e17--0162--5722--bf4c--f18a4cab61c7', 'dm-uuid-LVM-9uk4YiTZadA2OsxZkkgZB77Y39lpzkYip18PAjava5s6U1lHF4Tvey4NloiLtVL2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552813 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--caff573e--485a--5d29--90dc--90eefd21fd68-osd--block--caff573e--485a--5d29--90dc--90eefd21fd68', 'dm-uuid-LVM-RxxKgkgukx9yiVevNTLc9qm1B1abF2Vhik61pg9cUadwL3fll230ZQ0WvBDc0kJ0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552821 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part1', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part14', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part15', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part16', 'scsi-SQEMU_QEMU_HARDDISK_c88b683f-dc1f-4f4c-815b-59025e141d37-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--deb73447--54c2--58c6--89f8--2e63b50c59b2-osd--block--deb73447--54c2--58c6--89f8--2e63b50c59b2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-x0Ho3y-Pqsq-ac3I-beEO-ZTA1-pzuy-YRapkj', 'scsi-0QEMU_QEMU_HARDDISK_4dd49722-42e6-4e94-9106-a95d5116fdb0', 'scsi-SQEMU_QEMU_HARDDISK_4dd49722-42e6-4e94-9106-a95d5116fdb0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552844 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1-osd--block--6d43fc0f--0470--50ff--9d43--3faecb8a0ab1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NEFu06-MPXX-Rh7R-idEq-pJyD-oFMN-0BLXas', 'scsi-0QEMU_QEMU_HARDDISK_1cf24504-b3f3-4e87-bda4-4a150d83b5cd', 'scsi-SQEMU_QEMU_HARDDISK_1cf24504-b3f3-4e87-bda4-4a150d83b5cd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b11ce89-f193-4587-acb9-80845fc85b80', 'scsi-SQEMU_QEMU_HARDDISK_5b11ce89-f193-4587-acb9-80845fc85b80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552904 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552911 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552918 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.552925 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552932 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552943 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552962 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d9a42c6-6415-42b2-9cd6-58e920cd7387-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 07:14:56.552969 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d4db71fd--07e0--550b--b185--dcfd36a5307b-osd--block--d4db71fd--07e0--550b--b185--dcfd36a5307b', 'dm-uuid-LVM-2VuNSydCF6xDPFVEK9I5XoXP18hgexhbgpWMv9SItC0xRgBVntLtQHtX3v4TsWZz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552981 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--05a06e17--0162--5722--bf4c--f18a4cab61c7-osd--block--05a06e17--0162--5722--bf4c--f18a4cab61c7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-701MLa-Hvmr-zjJn-Mf5W-YNYL-f2gr-hokHBV', 'scsi-0QEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7', 'scsi-SQEMU_QEMU_HARDDISK_c93c054d-d324-48de-9f46-886df7842ff7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.552992 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--caff573e--485a--5d29--90dc--90eefd21fd68-osd--block--caff573e--485a--5d29--90dc--90eefd21fd68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4VIc2w-sS1q-hgag-jCCK-DQqD-F8UU-9JzOKT', 'scsi-0QEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee', 'scsi-SQEMU_QEMU_HARDDISK_38f6fb83-908a-4dc2-a0dd-a3bb8d4e5dee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553004 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0c5dfb3--0a46--5f65--b869--b08108365918-osd--block--a0c5dfb3--0a46--5f65--b869--b08108365918', 
'dm-uuid-LVM-kfNH1NnBEHxPOM95MFR51kfI2q2qlQpVOr4q5w24oDuwPe24et2QFHL52n2BudGm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0', 'scsi-SQEMU_QEMU_HARDDISK_b81412c7-c90d-434c-bce7-fcbaa76ae3c0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553018 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553029 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553036 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.553043 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553054 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553073 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553080 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553091 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553098 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_cc255c2a-54c4-46d1-b37c-21de8fb436bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 07:14:56.553121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d4db71fd--07e0--550b--b185--dcfd36a5307b-osd--block--d4db71fd--07e0--550b--b185--dcfd36a5307b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e9kad0-VhmD-NuZg-oCqb-z4kM-k78m-t9RP2d', 'scsi-0QEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400', 'scsi-SQEMU_QEMU_HARDDISK_3567b0e7-c22b-4a61-9c89-3afd695b5400'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a0c5dfb3--0a46--5f65--b869--b08108365918-osd--block--a0c5dfb3--0a46--5f65--b869--b08108365918'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-usR8JB-9mkA-PaYY-xtc5-7Wti-Y4N5-0AfeXV', 'scsi-0QEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c', 'scsi-SQEMU_QEMU_HARDDISK_60eaf991-1ab4-4753-9c6a-a15ff08d271c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553142 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3', 'scsi-SQEMU_QEMU_HARDDISK_efb009a3-4323-4607-93cb-907bed8bb1e3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553152 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-06-20-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 07:14:56.553160 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:14:56.553166 | orchestrator | 2025-09-19 07:14:56.553173 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-19 07:14:56.553180 | orchestrator | Friday 19 September 2025 07:13:00 +0000 (0:00:00.634) 0:00:17.404 ****** 2025-09-19 07:14:56.553186 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:14:56.553193 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:14:56.553200 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:14:56.553206 | orchestrator | 2025-09-19 07:14:56.553213 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-19 07:14:56.553220 | orchestrator | Friday 19 September 2025 07:13:01 +0000 (0:00:00.703) 0:00:18.107 ****** 2025-09-19 07:14:56.553230 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:14:56.553237 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:14:56.553243 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:14:56.553250 | orchestrator | 2025-09-19 07:14:56.553257 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 07:14:56.553263 | orchestrator | Friday 19 September 2025 07:13:02 +0000 (0:00:00.482) 0:00:18.590 ****** 2025-09-19 07:14:56.553270 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:14:56.553276 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:14:56.553283 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:14:56.553290 | orchestrator | 2025-09-19 07:14:56.553296 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 07:14:56.553303 | orchestrator | Friday 19 September 2025 07:13:02 +0000 (0:00:00.654) 0:00:19.244 
****** 2025-09-19 07:14:56.553310 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553316 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.553323 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:14:56.553330 | orchestrator | 2025-09-19 07:14:56.553336 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 07:14:56.553343 | orchestrator | Friday 19 September 2025 07:13:02 +0000 (0:00:00.312) 0:00:19.556 ****** 2025-09-19 07:14:56.553350 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553356 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.553363 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:14:56.553370 | orchestrator | 2025-09-19 07:14:56.553376 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 07:14:56.553383 | orchestrator | Friday 19 September 2025 07:13:03 +0000 (0:00:00.452) 0:00:20.009 ****** 2025-09-19 07:14:56.553389 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553396 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.553402 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:14:56.553409 | orchestrator | 2025-09-19 07:14:56.553416 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-19 07:14:56.553422 | orchestrator | Friday 19 September 2025 07:13:03 +0000 (0:00:00.519) 0:00:20.528 ****** 2025-09-19 07:14:56.553429 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-19 07:14:56.553436 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-19 07:14:56.553443 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-19 07:14:56.553449 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-19 07:14:56.553456 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-19 07:14:56.553462 | orchestrator | 
ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-19 07:14:56.553469 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-19 07:14:56.553475 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-19 07:14:56.553482 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-19 07:14:56.553488 | orchestrator | 2025-09-19 07:14:56.553495 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-19 07:14:56.553502 | orchestrator | Friday 19 September 2025 07:13:04 +0000 (0:00:00.836) 0:00:21.365 ****** 2025-09-19 07:14:56.553508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 07:14:56.553515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 07:14:56.553522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 07:14:56.553528 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553535 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-19 07:14:56.553541 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-19 07:14:56.553548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-19 07:14:56.553554 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.553561 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-19 07:14:56.553571 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-19 07:14:56.553581 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-19 07:14:56.553588 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:14:56.553595 | orchestrator | 2025-09-19 07:14:56.553602 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-19 07:14:56.553608 | orchestrator | Friday 19 September 2025 07:13:05 +0000 (0:00:00.353) 0:00:21.718 ****** 2025-09-19 
07:14:56.553615 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:14:56.553622 | orchestrator | 2025-09-19 07:14:56.553629 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 07:14:56.553636 | orchestrator | Friday 19 September 2025 07:13:05 +0000 (0:00:00.706) 0:00:22.424 ****** 2025-09-19 07:14:56.553642 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553649 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.553655 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:14:56.553662 | orchestrator | 2025-09-19 07:14:56.553672 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-19 07:14:56.553679 | orchestrator | Friday 19 September 2025 07:13:06 +0000 (0:00:00.365) 0:00:22.790 ****** 2025-09-19 07:14:56.553686 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553692 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.553699 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:14:56.553705 | orchestrator | 2025-09-19 07:14:56.553712 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-19 07:14:56.553718 | orchestrator | Friday 19 September 2025 07:13:06 +0000 (0:00:00.319) 0:00:23.109 ****** 2025-09-19 07:14:56.553725 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553732 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.553738 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:14:56.553778 | orchestrator | 2025-09-19 07:14:56.553785 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 07:14:56.553792 | orchestrator | Friday 19 September 2025 07:13:06 +0000 (0:00:00.334) 0:00:23.444 ****** 2025-09-19 
07:14:56.553798 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:14:56.553805 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:14:56.553812 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:14:56.553818 | orchestrator | 2025-09-19 07:14:56.553825 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 07:14:56.553831 | orchestrator | Friday 19 September 2025 07:13:07 +0000 (0:00:00.572) 0:00:24.016 ****** 2025-09-19 07:14:56.553838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 07:14:56.553844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 07:14:56.553851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 07:14:56.553858 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553864 | orchestrator | 2025-09-19 07:14:56.553871 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 07:14:56.553877 | orchestrator | Friday 19 September 2025 07:13:07 +0000 (0:00:00.377) 0:00:24.393 ****** 2025-09-19 07:14:56.553884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 07:14:56.553891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 07:14:56.553897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 07:14:56.553904 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553910 | orchestrator | 2025-09-19 07:14:56.553917 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 07:14:56.553924 | orchestrator | Friday 19 September 2025 07:13:08 +0000 (0:00:00.374) 0:00:24.768 ****** 2025-09-19 07:14:56.553930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 07:14:56.553937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 07:14:56.553948 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 07:14:56.553955 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.553961 | orchestrator | 2025-09-19 07:14:56.553968 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 07:14:56.553974 | orchestrator | Friday 19 September 2025 07:13:08 +0000 (0:00:00.348) 0:00:25.117 ****** 2025-09-19 07:14:56.553981 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:14:56.553988 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:14:56.553994 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:14:56.554001 | orchestrator | 2025-09-19 07:14:56.554007 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-19 07:14:56.554038 | orchestrator | Friday 19 September 2025 07:13:08 +0000 (0:00:00.322) 0:00:25.439 ****** 2025-09-19 07:14:56.554047 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 07:14:56.554054 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 07:14:56.554061 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 07:14:56.554067 | orchestrator | 2025-09-19 07:14:56.554074 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-19 07:14:56.554081 | orchestrator | Friday 19 September 2025 07:13:09 +0000 (0:00:00.515) 0:00:25.954 ****** 2025-09-19 07:14:56.554087 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 07:14:56.554094 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 07:14:56.554100 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 07:14:56.554107 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 07:14:56.554114 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-09-19 07:14:56.554120 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 07:14:56.554127 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 07:14:56.554134 | orchestrator | 2025-09-19 07:14:56.554144 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-19 07:14:56.554151 | orchestrator | Friday 19 September 2025 07:13:10 +0000 (0:00:01.020) 0:00:26.975 ****** 2025-09-19 07:14:56.554157 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 07:14:56.554164 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 07:14:56.554170 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 07:14:56.554177 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 07:14:56.554183 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 07:14:56.554190 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 07:14:56.554197 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 07:14:56.554204 | orchestrator | 2025-09-19 07:14:56.554215 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-19 07:14:56.554222 | orchestrator | Friday 19 September 2025 07:13:12 +0000 (0:00:02.001) 0:00:28.977 ****** 2025-09-19 07:14:56.554229 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:14:56.554236 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:14:56.554242 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-19 07:14:56.554249 | orchestrator | 2025-09-19 07:14:56.554256 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-19 07:14:56.554262 | orchestrator | Friday 19 September 2025 07:13:12 +0000 (0:00:00.371) 0:00:29.348 ****** 2025-09-19 07:14:56.554269 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 07:14:56.554282 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 07:14:56.554289 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 07:14:56.554296 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 07:14:56.554303 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 07:14:56.554309 | orchestrator | 2025-09-19 07:14:56.554316 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-09-19 07:14:56.554323 | orchestrator | Friday 19 September 2025 07:14:00 +0000 (0:00:47.629) 0:01:16.978 ****** 2025-09-19 07:14:56.554329 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554336 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554342 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554349 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554355 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554362 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554368 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-19 07:14:56.554375 | orchestrator | 2025-09-19 07:14:56.554381 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-19 07:14:56.554388 | orchestrator | Friday 19 September 2025 07:14:25 +0000 (0:00:25.589) 0:01:42.567 ****** 2025-09-19 07:14:56.554395 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554401 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554408 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554414 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554421 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554432 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554439 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 07:14:56.554446 | orchestrator | 2025-09-19 07:14:56.554452 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-19 07:14:56.554459 | orchestrator | Friday 19 September 2025 07:14:38 +0000 (0:00:12.722) 0:01:55.290 ****** 2025-09-19 07:14:56.554465 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554472 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 07:14:56.554478 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 07:14:56.554485 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554496 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 07:14:56.554502 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 07:14:56.554512 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554519 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 07:14:56.554526 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 07:14:56.554532 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554539 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 07:14:56.554545 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 07:14:56.554552 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554558 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
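The "copy ceph key(s) if needed" task above fans out each generated keyring to every monitor host; the delegated target `{{ item.1 }}` in the summary line is the host element of a key-by-host loop, which is why the hosts testbed-node-0/1/2 cycle repeatedly through the `changed:` lines. A minimal sketch of that pairing, assuming six keys (the log only shows `item=None` per pair, so the key names below are hypothetical):

```python
from itertools import product

# Mon hosts are taken from the log above; the key names are an assumption,
# since the task output masks each loop item as "item=None".
mon_hosts = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
keys = ["client.admin", "client.cinder", "client.cinder-backup",
        "client.nova", "client.glance", "client.gnocchi"]  # assumed names

# Each (key, host) pair corresponds to one delegated "changed:" line,
# with the host index cycling fastest -- matching the log order.
copy_jobs = [(key, host) for key, host in product(keys, mon_hosts)]
print(len(copy_jobs))  # 18 delegated copies, matching the 18 changed lines
```

The same product structure is what Ansible's nested loops produce, with `item.0` the key and `item.1` the delegation host.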
2025-09-19 07:14:56.554565 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 07:14:56.554571 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 07:14:56.554578 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 07:14:56.554584 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 07:14:56.554591 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-19 07:14:56.554598 | orchestrator | 2025-09-19 07:14:56.554604 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:14:56.554611 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-19 07:14:56.554619 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-19 07:14:56.554626 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-19 07:14:56.554632 | orchestrator | 2025-09-19 07:14:56.554639 | orchestrator | 2025-09-19 07:14:56.554646 | orchestrator | 2025-09-19 07:14:56.554652 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:14:56.554659 | orchestrator | Friday 19 September 2025 07:14:55 +0000 (0:00:16.594) 0:02:11.884 ****** 2025-09-19 07:14:56.554665 | orchestrator | =============================================================================== 2025-09-19 07:14:56.554672 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.63s 2025-09-19 07:14:56.554678 | orchestrator | generate keys ---------------------------------------------------------- 25.59s 2025-09-19 07:14:56.554685 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.59s 
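PLAY RECAP lines like the ones above follow a fixed `host : key=value ...` layout, so per-host counters can be extracted mechanically. A short parser sketch (the field names come from the recap format itself; the helper function is hypothetical):

```python
import re

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Split an Ansible PLAY RECAP line into (host, counters)."""
    host, _, rest = line.partition(":")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, stats = parse_recap(
    "testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 "
    "skipped=20  rescued=0 ignored=0"
)
print(host, stats["ok"], stats["changed"])  # testbed-node-5 23 3
```

A nonzero `failed` or `unreachable` counter is what would mark this play as broken; all three nodes above report zero for both.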
2025-09-19 07:14:56.554691 | orchestrator | get keys from monitors ------------------------------------------------- 12.72s 2025-09-19 07:14:56.554698 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.13s 2025-09-19 07:14:56.554704 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.00s 2025-09-19 07:14:56.554711 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.73s 2025-09-19 07:14:56.554717 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s 2025-09-19 07:14:56.554724 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s 2025-09-19 07:14:56.554730 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.81s 2025-09-19 07:14:56.554737 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s 2025-09-19 07:14:56.554777 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.75s 2025-09-19 07:14:56.554790 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s 2025-09-19 07:14:56.554796 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s 2025-09-19 07:14:56.554803 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.68s 2025-09-19 07:14:56.554810 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s 2025-09-19 07:14:56.554816 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s 2025-09-19 07:14:56.554823 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.63s 2025-09-19 07:14:56.554833 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.57s 2025-09-19 
07:14:56.554840 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.57s 2025-09-19 07:14:56.554847 | orchestrator | 2025-09-19 07:14:56 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:14:56.554853 | orchestrator | 2025-09-19 07:14:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:14:59.597383 | orchestrator | 2025-09-19 07:14:59 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:14:59.602508 | orchestrator | 2025-09-19 07:14:59 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:14:59.604792 | orchestrator | 2025-09-19 07:14:59 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state STARTED 2025-09-19 07:14:59.604884 | orchestrator | 2025-09-19 07:14:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:02.649348 | orchestrator | 2025-09-19 07:15:02 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:02.651437 | orchestrator | 2025-09-19 07:15:02 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:02.653392 | orchestrator | 2025-09-19 07:15:02 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state STARTED 2025-09-19 07:15:02.653438 | orchestrator | 2025-09-19 07:15:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:05.702429 | orchestrator | 2025-09-19 07:15:05 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:05.703300 | orchestrator | 2025-09-19 07:15:05 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:05.708068 | orchestrator | 2025-09-19 07:15:05 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state STARTED 2025-09-19 07:15:05.708131 | orchestrator | 2025-09-19 07:15:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:08.759665 | orchestrator | 2025-09-19 07:15:08 | INFO  | Task 
97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:08.760457 | orchestrator | 2025-09-19 07:15:08 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:08.762375 | orchestrator | 2025-09-19 07:15:08 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state STARTED 2025-09-19 07:15:08.762592 | orchestrator | 2025-09-19 07:15:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:11.823107 | orchestrator | 2025-09-19 07:15:11 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:11.823916 | orchestrator | 2025-09-19 07:15:11 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:11.826127 | orchestrator | 2025-09-19 07:15:11 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state STARTED 2025-09-19 07:15:11.826220 | orchestrator | 2025-09-19 07:15:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:14.881531 | orchestrator | 2025-09-19 07:15:14 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:14.883444 | orchestrator | 2025-09-19 07:15:14 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:14.885357 | orchestrator | 2025-09-19 07:15:14 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state STARTED 2025-09-19 07:15:14.885391 | orchestrator | 2025-09-19 07:15:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:17.935263 | orchestrator | 2025-09-19 07:15:17 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:17.937456 | orchestrator | 2025-09-19 07:15:17 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:17.939653 | orchestrator | 2025-09-19 07:15:17 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state STARTED 2025-09-19 07:15:17.939692 | orchestrator | 2025-09-19 07:15:17 | INFO  | Wait 1 second(s) until the next 
check 2025-09-19 07:15:20.992492 | orchestrator | 2025-09-19 07:15:20 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:20.994666 | orchestrator | 2025-09-19 07:15:20 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:20.996543 | orchestrator | 2025-09-19 07:15:20 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state STARTED 2025-09-19 07:15:20.996590 | orchestrator | 2025-09-19 07:15:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:24.053909 | orchestrator | 2025-09-19 07:15:24 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:24.056071 | orchestrator | 2025-09-19 07:15:24 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:24.057700 | orchestrator | 2025-09-19 07:15:24 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state STARTED 2025-09-19 07:15:24.058012 | orchestrator | 2025-09-19 07:15:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:27.110288 | orchestrator | 2025-09-19 07:15:27 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:27.112194 | orchestrator | 2025-09-19 07:15:27 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:27.114567 | orchestrator | 2025-09-19 07:15:27 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED 2025-09-19 07:15:27.116446 | orchestrator | 2025-09-19 07:15:27 | INFO  | Task 05bc9795-0326-4111-b440-b9cbd3c477d2 is in state SUCCESS 2025-09-19 07:15:27.116474 | orchestrator | 2025-09-19 07:15:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:30.170152 | orchestrator | 2025-09-19 07:15:30 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:30.171185 | orchestrator | 2025-09-19 07:15:30 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 
07:15:30.173204 | orchestrator | 2025-09-19 07:15:30 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED 2025-09-19 07:15:30.173293 | orchestrator | 2025-09-19 07:15:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:33.222217 | orchestrator | 2025-09-19 07:15:33 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:33.225579 | orchestrator | 2025-09-19 07:15:33 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:33.227852 | orchestrator | 2025-09-19 07:15:33 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED 2025-09-19 07:15:33.228145 | orchestrator | 2025-09-19 07:15:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:36.272217 | orchestrator | 2025-09-19 07:15:36 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:36.273521 | orchestrator | 2025-09-19 07:15:36 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state STARTED 2025-09-19 07:15:36.276411 | orchestrator | 2025-09-19 07:15:36 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED 2025-09-19 07:15:36.276457 | orchestrator | 2025-09-19 07:15:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:15:39.324423 | orchestrator | 2025-09-19 07:15:39 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED 2025-09-19 07:15:39.329316 | orchestrator | 2025-09-19 07:15:39 | INFO  | Task 6cf6b7c6-c277-4385-aab5-a5f8939ea5fc is in state SUCCESS 2025-09-19 07:15:39.330704 | orchestrator | 2025-09-19 07:15:39.330770 | orchestrator | 2025-09-19 07:15:39.330784 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-19 07:15:39.330796 | orchestrator | 2025-09-19 07:15:39.330808 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-19 07:15:39.330819 | orchestrator | Friday 19 September 2025 
07:14:59 +0000 (0:00:00.185) 0:00:00.185 ****** 2025-09-19 07:15:39.330830 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-19 07:15:39.330843 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 07:15:39.330854 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 07:15:39.330990 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 07:15:39.331006 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 07:15:39.331018 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-19 07:15:39.331070 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-19 07:15:39.331340 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-19 07:15:39.331358 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-19 07:15:39.331370 | orchestrator | 2025-09-19 07:15:39.331381 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-19 07:15:39.331392 | orchestrator | Friday 19 September 2025 07:15:04 +0000 (0:00:04.321) 0:00:04.506 ****** 2025-09-19 07:15:39.331403 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-19 07:15:39.331415 | orchestrator | 2025-09-19 07:15:39.331426 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-19 07:15:39.331437 | orchestrator | Friday 19 September 2025 07:15:05 +0000 (0:00:01.030) 0:00:05.537 ****** 2025-09-19 07:15:39.331464 | orchestrator | changed: 
[testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-19 07:15:39.331475 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 07:15:39.331487 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 07:15:39.331498 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 07:15:39.331510 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 07:15:39.331521 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-19 07:15:39.331532 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-19 07:15:39.331543 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-19 07:15:39.331553 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-19 07:15:39.331586 | orchestrator | 2025-09-19 07:15:39.331597 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-19 07:15:39.331608 | orchestrator | Friday 19 September 2025 07:15:18 +0000 (0:00:13.281) 0:00:18.818 ****** 2025-09-19 07:15:39.331619 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-19 07:15:39.331630 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 07:15:39.331641 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 07:15:39.331651 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 07:15:39.331662 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 07:15:39.331673 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-19 07:15:39.331683 | 
orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-19 07:15:39.331694 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-19 07:15:39.331704 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-19 07:15:39.331715 | orchestrator | 2025-09-19 07:15:39.331783 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:15:39.331796 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:15:39.331809 | orchestrator | 2025-09-19 07:15:39.331819 | orchestrator | 2025-09-19 07:15:39.331830 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:15:39.331841 | orchestrator | Friday 19 September 2025 07:15:25 +0000 (0:00:06.980) 0:00:25.798 ****** 2025-09-19 07:15:39.331851 | orchestrator | =============================================================================== 2025-09-19 07:15:39.331862 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.28s 2025-09-19 07:15:39.331873 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.98s 2025-09-19 07:15:39.331883 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.32s 2025-09-19 07:15:39.331894 | orchestrator | Create share directory -------------------------------------------------- 1.03s 2025-09-19 07:15:39.331905 | orchestrator | 2025-09-19 07:15:39.331915 | orchestrator | 2025-09-19 07:15:39.331926 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:15:39.331937 | orchestrator | 2025-09-19 07:15:39.331961 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:15:39.331973 | orchestrator | Friday 19 September 2025 
07:13:52 +0000 (0:00:00.268) 0:00:00.268 ****** 2025-09-19 07:15:39.331986 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:15:39.331999 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:15:39.332011 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:15:39.332023 | orchestrator | 2025-09-19 07:15:39.332035 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:15:39.332048 | orchestrator | Friday 19 September 2025 07:13:53 +0000 (0:00:00.301) 0:00:00.570 ****** 2025-09-19 07:15:39.332058 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-19 07:15:39.332070 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-19 07:15:39.332080 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-19 07:15:39.332091 | orchestrator | 2025-09-19 07:15:39.332102 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-19 07:15:39.332113 | orchestrator | 2025-09-19 07:15:39.332123 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 07:15:39.332134 | orchestrator | Friday 19 September 2025 07:13:53 +0000 (0:00:00.426) 0:00:00.996 ****** 2025-09-19 07:15:39.332145 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:15:39.332155 | orchestrator | 2025-09-19 07:15:39.332175 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-19 07:15:39.332186 | orchestrator | Friday 19 September 2025 07:13:53 +0000 (0:00:00.525) 0:00:01.522 ****** 2025-09-19 07:15:39.332211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:15:39.332243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 07:15:39.332272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 07:15:39.332285 | orchestrator |
2025-09-19 07:15:39.332296 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-09-19 07:15:39.332307 | orchestrator | Friday 19 September 2025 07:13:55 +0000 (0:00:01.088) 0:00:02.610 ******
2025-09-19 07:15:39.332318 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.332328 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.332339 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.332349 | orchestrator |
2025-09-19 07:15:39.332360 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 07:15:39.332371 | orchestrator | Friday 19 September 2025 07:13:55 +0000 (0:00:00.428) 0:00:03.039 ******
2025-09-19 07:15:39.332381 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-19 07:15:39.332392 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-19 07:15:39.332410 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-09-19 07:15:39.332421 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-09-19 07:15:39.332431 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-09-19 07:15:39.332442 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-09-19 07:15:39.332453 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-09-19 07:15:39.332470 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-09-19 07:15:39.332481 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-19 07:15:39.332492 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-19 07:15:39.332502 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-09-19 07:15:39.332513 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-09-19 07:15:39.332523 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-09-19 07:15:39.332534 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-09-19 07:15:39.332544 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-09-19 07:15:39.332555 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-09-19 07:15:39.332566 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-19 07:15:39.332576 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-19 07:15:39.332587 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-09-19 07:15:39.332598 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-09-19 07:15:39.332613 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-09-19 07:15:39.332624 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-09-19 07:15:39.332635 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-09-19 07:15:39.332645 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-09-19 07:15:39.332657 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-09-19 07:15:39.332669 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-09-19 07:15:39.332680 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-09-19 07:15:39.332690 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-09-19 07:15:39.332701 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-09-19 07:15:39.332712 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-09-19 07:15:39.332722 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-09-19 07:15:39.332752 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-09-19 07:15:39.332763 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-09-19 07:15:39.332774 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-09-19 07:15:39.332784 | orchestrator |
2025-09-19 07:15:39.332795 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.332820 | orchestrator | Friday 19 September 2025 07:13:56 +0000 (0:00:00.769) 0:00:03.809 ******
2025-09-19 07:15:39.332832 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.332842 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.332853 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.332864 | orchestrator |
2025-09-19 07:15:39.332875 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.332886 | orchestrator | Friday 19 September 2025 07:13:56 +0000 (0:00:00.309) 0:00:04.118 ******
2025-09-19 07:15:39.332896 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.332907 | orchestrator |
2025-09-19 07:15:39.332918 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.332935 | orchestrator | Friday 19 September 2025 07:13:56 +0000 (0:00:00.138) 0:00:04.257 ******
2025-09-19 07:15:39.332946 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.332957 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.332968 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.332979 | orchestrator |
2025-09-19 07:15:39.332990 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.333000 | orchestrator | Friday 19 September 2025 07:13:57 +0000 (0:00:00.457) 0:00:04.715 ******
2025-09-19 07:15:39.333011 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.333022 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.333033 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.333043 | orchestrator |
2025-09-19 07:15:39.333054 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.333065 | orchestrator | Friday 19 September 2025 07:13:57 +0000 (0:00:00.310) 0:00:05.025 ******
2025-09-19 07:15:39.333075 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333086 | orchestrator |
2025-09-19 07:15:39.333097 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.333108 | orchestrator | Friday 19 September 2025 07:13:57 +0000 (0:00:00.127) 0:00:05.152 ******
2025-09-19 07:15:39.333118 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333129 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.333139 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.333150 | orchestrator |
2025-09-19 07:15:39.333161 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.333172 | orchestrator | Friday 19 September 2025 07:13:57 +0000 (0:00:00.289) 0:00:05.441 ******
2025-09-19 07:15:39.333182 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.333193 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.333204 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.333214 | orchestrator |
2025-09-19 07:15:39.333225 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.333236 | orchestrator | Friday 19 September 2025 07:13:58 +0000 (0:00:00.287) 0:00:05.729 ******
2025-09-19 07:15:39.333246 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333257 | orchestrator |
2025-09-19 07:15:39.333268 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.333278 | orchestrator | Friday 19 September 2025 07:13:58 +0000 (0:00:00.140) 0:00:05.869 ******
2025-09-19 07:15:39.333289 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333304 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.333315 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.333326 | orchestrator |
2025-09-19 07:15:39.333337 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.333347 | orchestrator | Friday 19 September 2025 07:13:58 +0000 (0:00:00.494) 0:00:06.363 ******
2025-09-19 07:15:39.333358 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.333368 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.333379 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.333390 | orchestrator |
2025-09-19 07:15:39.333401 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.333412 | orchestrator | Friday 19 September 2025 07:13:59 +0000 (0:00:00.300) 0:00:06.664 ******
2025-09-19 07:15:39.333430 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333441 | orchestrator |
2025-09-19 07:15:39.333451 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.333462 | orchestrator | Friday 19 September 2025 07:13:59 +0000 (0:00:00.135) 0:00:06.799 ******
2025-09-19 07:15:39.333473 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333484 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.333494 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.333505 | orchestrator |
2025-09-19 07:15:39.333516 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.333526 | orchestrator | Friday 19 September 2025 07:13:59 +0000 (0:00:00.304) 0:00:07.104 ******
2025-09-19 07:15:39.333537 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.333548 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.333558 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.333569 | orchestrator |
2025-09-19 07:15:39.333580 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.333590 | orchestrator | Friday 19 September 2025 07:13:59 +0000 (0:00:00.293) 0:00:07.397 ******
2025-09-19 07:15:39.333601 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333612 | orchestrator |
2025-09-19 07:15:39.333622 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.333633 | orchestrator | Friday 19 September 2025 07:14:00 +0000 (0:00:00.324) 0:00:07.722 ******
2025-09-19 07:15:39.333643 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333654 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.333664 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.333675 | orchestrator |
2025-09-19 07:15:39.333686 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.333696 | orchestrator | Friday 19 September 2025 07:14:00 +0000 (0:00:00.311) 0:00:08.034 ******
2025-09-19 07:15:39.333707 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.333718 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.333767 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.333779 | orchestrator |
2025-09-19 07:15:39.333790 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.333801 | orchestrator | Friday 19 September 2025 07:14:00 +0000 (0:00:00.322) 0:00:08.357 ******
2025-09-19 07:15:39.333811 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333822 | orchestrator |
2025-09-19 07:15:39.333832 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.333843 | orchestrator | Friday 19 September 2025 07:14:00 +0000 (0:00:00.123) 0:00:08.481 ******
2025-09-19 07:15:39.333854 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.333865 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.333875 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.333886 | orchestrator |
2025-09-19 07:15:39.333896 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.333907 | orchestrator | Friday 19 September 2025 07:14:01 +0000 (0:00:00.318) 0:00:08.799 ******
2025-09-19 07:15:39.333918 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.333929 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.333939 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.333950 | orchestrator |
2025-09-19 07:15:39.333967 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.333978 | orchestrator | Friday 19 September 2025 07:14:01 +0000 (0:00:00.496) 0:00:09.295 ******
2025-09-19 07:15:39.333989 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.334000 | orchestrator |
2025-09-19 07:15:39.334010 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.334075 | orchestrator | Friday 19 September 2025 07:14:01 +0000 (0:00:00.122) 0:00:09.418 ******
2025-09-19 07:15:39.334087 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.334097 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.334116 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.334126 | orchestrator |
2025-09-19 07:15:39.334137 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.334148 | orchestrator | Friday 19 September 2025 07:14:02 +0000 (0:00:00.295) 0:00:09.714 ******
2025-09-19 07:15:39.334159 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.334170 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.334181 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.334191 | orchestrator |
2025-09-19 07:15:39.334202 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.334213 | orchestrator | Friday 19 September 2025 07:14:02 +0000 (0:00:00.351) 0:00:10.065 ******
2025-09-19 07:15:39.334224 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.334235 | orchestrator |
2025-09-19 07:15:39.334245 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.334256 | orchestrator | Friday 19 September 2025 07:14:02 +0000 (0:00:00.154) 0:00:10.220 ******
2025-09-19 07:15:39.334267 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.334278 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.334288 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.334299 | orchestrator |
2025-09-19 07:15:39.334310 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.334321 | orchestrator | Friday 19 September 2025 07:14:02 +0000 (0:00:00.279) 0:00:10.500 ******
2025-09-19 07:15:39.334331 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.334342 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.334353 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.334363 | orchestrator |
2025-09-19 07:15:39.334374 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.334392 | orchestrator | Friday 19 September 2025 07:14:03 +0000 (0:00:00.513) 0:00:11.013 ******
2025-09-19 07:15:39.334404 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.334415 | orchestrator |
2025-09-19 07:15:39.334426 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.334436 | orchestrator | Friday 19 September 2025 07:14:03 +0000 (0:00:00.124) 0:00:11.138 ******
2025-09-19 07:15:39.334447 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.334458 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.334469 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.334480 | orchestrator |
2025-09-19 07:15:39.334490 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 07:15:39.334501 | orchestrator | Friday 19 September 2025 07:14:03 +0000 (0:00:00.298) 0:00:11.436 ******
2025-09-19 07:15:39.334512 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:15:39.334523 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:15:39.334534 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:15:39.334544 | orchestrator |
2025-09-19 07:15:39.334555 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 07:15:39.334566 | orchestrator | Friday 19 September 2025 07:14:04 +0000 (0:00:00.351) 0:00:11.787 ******
2025-09-19 07:15:39.334577 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.334588 | orchestrator |
2025-09-19 07:15:39.334599 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 07:15:39.334610 | orchestrator | Friday 19 September 2025 07:14:04 +0000 (0:00:00.131) 0:00:11.919 ******
2025-09-19 07:15:39.334620 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.334631 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.334642 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.334652 | orchestrator |
2025-09-19 07:15:39.334663 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-09-19 07:15:39.334674 | orchestrator | Friday 19 September 2025 07:14:04 +0000 (0:00:00.469) 0:00:12.388 ******
2025-09-19 07:15:39.334685 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:15:39.334695 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:15:39.334706 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:15:39.334723 | orchestrator |
2025-09-19 07:15:39.334757 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-09-19 07:15:39.334768 | orchestrator | Friday 19 September 2025 07:14:06 +0000 (0:00:01.725) 0:00:14.114 ******
2025-09-19 07:15:39.334778 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 07:15:39.334789 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 07:15:39.334800 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 07:15:39.334810 | orchestrator |
2025-09-19 07:15:39.334821 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-09-19 07:15:39.334831 | orchestrator | Friday 19 September 2025 07:14:08 +0000 (0:00:01.843) 0:00:15.958 ******
2025-09-19 07:15:39.334842 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 07:15:39.334853 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 07:15:39.334864 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 07:15:39.334875 | orchestrator |
2025-09-19 07:15:39.334886 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-09-19 07:15:39.334896 | orchestrator | Friday 19 September 2025 07:14:10 +0000 (0:00:02.263) 0:00:18.221 ******
2025-09-19 07:15:39.334915 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 07:15:39.334926 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 07:15:39.334937 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 07:15:39.334948 | orchestrator |
2025-09-19 07:15:39.334959 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-09-19 07:15:39.334969 | orchestrator | Friday 19 September 2025 07:14:12 +0000 (0:00:01.898) 0:00:20.120 ******
2025-09-19 07:15:39.334980 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.334991 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.335002 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.335012 | orchestrator |
2025-09-19 07:15:39.335023 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-09-19 07:15:39.335034 | orchestrator | Friday 19 September 2025 07:14:12 +0000 (0:00:00.349) 0:00:20.519 ******
2025-09-19 07:15:39.335044 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.335055 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.335066 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.335076 | orchestrator |
2025-09-19 07:15:39.335087 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 07:15:39.335098 | orchestrator | Friday 19 September 2025 07:14:13 +0000 (0:00:00.555) 0:00:20.868 ******
2025-09-19 07:15:39.335109 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:15:39.335119 | orchestrator |
2025-09-19 07:15:39.335130 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-09-19 07:15:39.335141 | orchestrator | Friday 19 September 2025 07:14:13 +0000 (0:00:00.555) 0:00:21.424 ******
2025-09-19 07:15:39.335164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no',
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 07:15:39.335194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group':
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 07:15:39.335213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 07:15:39.335232 | orchestrator |
2025-09-19 07:15:39.335243 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-09-19 07:15:39.335254 | orchestrator | Friday 19 September 2025 07:14:15 +0000 (0:00:01.851) 0:00:23.275 ******
2025-09-19 07:15:39.335281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:15:39.335304 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:15:39.335316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:15:39.335334 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:15:39.335352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:15:39.335370 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:15:39.335381 | orchestrator | 2025-09-19 07:15:39.335392 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-19 07:15:39.335403 | orchestrator | Friday 19 September 2025 07:14:16 +0000 (0:00:00.645) 0:00:23.921 ****** 2025-09-19 07:15:39.335423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 
'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:15:39.335436 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:15:39.335453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:15:39.335471 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:15:39.335490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 07:15:39.335503 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:15:39.335514 | orchestrator | 2025-09-19 07:15:39.335524 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-19 07:15:39.335535 | orchestrator | Friday 19 September 2025 07:14:17 +0000 (0:00:00.814) 0:00:24.736 ****** 2025-09-19 07:15:39.335552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:15:39.335580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 07:15:39.335605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 07:15:39.335618 | orchestrator | 
2025-09-19 07:15:39.335629 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 07:15:39.335640 | orchestrator | Friday 19 September 2025 07:14:18 +0000 (0:00:01.446) 0:00:26.182 ******
2025-09-19 07:15:39.335651 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:15:39.335662 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:15:39.335672 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:15:39.335683 | orchestrator | 
2025-09-19 07:15:39.335694 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 07:15:39.335704 | orchestrator | Friday 19 September 2025 07:14:18 +0000 (0:00:00.303) 0:00:26.486 ******
2025-09-19 07:15:39.335715 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:15:39.335744 | orchestrator | 
2025-09-19 07:15:39.335755 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-09-19 07:15:39.335766 | orchestrator | Friday 19 September 2025 07:14:19 +0000 (0:00:00.591) 0:00:27.077 ******
2025-09-19 07:15:39.335777 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:15:39.335788 | orchestrator | 
2025-09-19 07:15:39.335804 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-19 07:15:39.335815 | orchestrator | Friday 19 September 2025 07:14:21 +0000 (0:00:02.306) 0:00:29.384 ******
2025-09-19 07:15:39.335825 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:15:39.335836 | orchestrator | 
2025-09-19 07:15:39.335847 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-19 07:15:39.335858 | orchestrator | Friday 19 September 2025 07:14:24 +0000 (0:00:02.712) 0:00:32.097 ******
2025-09-19 07:15:39.335868 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:15:39.335879 | orchestrator | 
2025-09-19 07:15:39.335890 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-19 07:15:39.335908 | orchestrator | Friday 19 September 2025 07:14:40 +0000 (0:00:15.717) 0:00:47.814 ******
2025-09-19 07:15:39.335918 | orchestrator | 
2025-09-19 07:15:39.335929 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-19 07:15:39.335940 | orchestrator | Friday 19 September 2025 07:14:40 +0000 (0:00:00.093) 0:00:47.907 ******
2025-09-19 07:15:39.335951 | orchestrator | 
2025-09-19 07:15:39.335961 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-19 07:15:39.335972 | orchestrator | Friday 19 September 2025 07:14:40 +0000 (0:00:00.100) 0:00:48.008 ******
2025-09-19 07:15:39.335983 | orchestrator | 
2025-09-19 07:15:39.335993 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-19 07:15:39.336004 | orchestrator | Friday 19 September 2025 07:14:40 +0000 (0:00:00.080) 0:00:48.089 ******
2025-09-19 07:15:39.336014 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:15:39.336025 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:15:39.336036 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:15:39.336047 | orchestrator | 
2025-09-19 07:15:39.336057 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:15:39.336068 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-19 07:15:39.336079 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-19 07:15:39.336098 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-19 07:15:39.336109 | orchestrator | 
2025-09-19 07:15:39.336120 | orchestrator | 
2025-09-19 07:15:39.336130 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:15:39.336141 | orchestrator | Friday 19 September 2025 07:15:38 +0000 (0:00:58.289) 0:01:46.378 ******
2025-09-19 07:15:39.336152 | orchestrator | ===============================================================================
2025-09-19 07:15:39.336163 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.29s
2025-09-19 07:15:39.336173 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.72s
2025-09-19 07:15:39.336184 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.71s
2025-09-19 07:15:39.336194 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.31s
2025-09-19 07:15:39.336205 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.26s
2025-09-19 07:15:39.336216 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.90s
2025-09-19 07:15:39.336226 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.85s
2025-09-19 07:15:39.336237 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.84s
2025-09-19 07:15:39.336248 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.73s
2025-09-19 07:15:39.336258 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.45s
2025-09-19 07:15:39.336269 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.09s
2025-09-19 07:15:39.336280 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.81s
2025-09-19 07:15:39.336290 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s
2025-09-19 07:15:39.336301 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s
2025-09-19 07:15:39.336312 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s
2025-09-19 07:15:39.336322 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s
2025-09-19 07:15:39.336333 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s
2025-09-19 07:15:39.336350 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s
2025-09-19 07:15:39.336361 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2025-09-19 07:15:39.336371 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.49s
2025-09-19 07:15:39.336382 | orchestrator | 2025-09-19 07:15:39 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:15:39.336393 | orchestrator | 2025-09-19 07:15:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:15:42.371060 | orchestrator | 2025-09-19 07:15:42 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:15:42.371321 | orchestrator | 2025-09-19 07:15:42 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:15:42.371444 | orchestrator | 2025-09-19 07:15:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:15:45.409138 | orchestrator | 2025-09-19 07:15:45 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:15:45.409286 | orchestrator | 2025-09-19 07:15:45 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:15:45.409314 | orchestrator | 2025-09-19 07:15:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:15:48.457709 | orchestrator | 2025-09-19 07:15:48 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:15:48.463905 | orchestrator | 2025-09-19 07:15:48 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:15:48.464468 | orchestrator | 2025-09-19 07:15:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:15:51.528769 | orchestrator | 2025-09-19 07:15:51 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:15:51.530153 | orchestrator | 2025-09-19 07:15:51 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:15:51.530198 | orchestrator | 2025-09-19 07:15:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:15:54.575228 | orchestrator | 2025-09-19 07:15:54 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:15:54.576502 | orchestrator | 2025-09-19 07:15:54 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:15:54.576784 | orchestrator | 2025-09-19 07:15:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:15:57.614593 | orchestrator | 2025-09-19 07:15:57 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:15:57.615941 | orchestrator | 2025-09-19 07:15:57 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:15:57.615973 | orchestrator | 2025-09-19 07:15:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:00.661248 | orchestrator | 2025-09-19 07:16:00 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:00.664005 | orchestrator | 2025-09-19 07:16:00 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:16:00.664046 | orchestrator | 2025-09-19 07:16:00 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:03.716678 | orchestrator | 2025-09-19 07:16:03 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:03.717424 | orchestrator | 2025-09-19 07:16:03 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:16:03.717456 | orchestrator | 2025-09-19 07:16:03 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:06.760251 | orchestrator | 2025-09-19 07:16:06 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:06.761805 | orchestrator | 2025-09-19 07:16:06 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:16:06.761859 | orchestrator | 2025-09-19 07:16:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:09.807253 | orchestrator | 2025-09-19 07:16:09 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:09.810293 | orchestrator | 2025-09-19 07:16:09 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:16:09.810589 | orchestrator | 2025-09-19 07:16:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:12.859637 | orchestrator | 2025-09-19 07:16:12 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:12.862325 | orchestrator | 2025-09-19 07:16:12 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:16:12.862360 | orchestrator | 2025-09-19 07:16:12 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:15.909597 | orchestrator | 2025-09-19 07:16:15 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:15.910782 | orchestrator | 2025-09-19 07:16:15 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:16:15.910834 | orchestrator | 2025-09-19 07:16:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:18.963216 | orchestrator | 2025-09-19 07:16:18 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:18.963467 | orchestrator | 2025-09-19 07:16:18 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state STARTED
2025-09-19 07:16:18.963484 | orchestrator | 2025-09-19 07:16:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:22.018377 | orchestrator | 2025-09-19 07:16:22 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:16:22.019136 | orchestrator | 2025-09-19 07:16:22 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:22.020937 | orchestrator | 2025-09-19 07:16:22 | INFO  | Task 7b9012bf-d2b0-42a3-95f3-089f521306b4 is in state STARTED
2025-09-19 07:16:22.024131 | orchestrator | 2025-09-19 07:16:22 | INFO  | Task 23c61228-7b7d-4895-b513-2f1f913e793a is in state SUCCESS
2025-09-19 07:16:22.025166 | orchestrator | 2025-09-19 07:16:22 | INFO  | Task 12e987fa-bac5-4bf8-b874-9032a3a062e8 is in state STARTED
2025-09-19 07:16:22.025527 | orchestrator | 2025-09-19 07:16:22 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:25.089976 | orchestrator | 2025-09-19 07:16:25 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:16:25.091372 | orchestrator | 2025-09-19 07:16:25 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:25.092995 | orchestrator | 2025-09-19 07:16:25 | INFO  | Task 7b9012bf-d2b0-42a3-95f3-089f521306b4 is in state STARTED
2025-09-19 07:16:25.094113 | orchestrator | 2025-09-19 07:16:25 | INFO  | Task 12e987fa-bac5-4bf8-b874-9032a3a062e8 is in state STARTED
2025-09-19 07:16:25.094210 | orchestrator | 2025-09-19 07:16:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:28.126699 | orchestrator | 2025-09-19 07:16:28 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:16:28.126822 | orchestrator | 2025-09-19 07:16:28 | INFO  | Task ccde68b7-e1bf-4ccf-b892-5ce2c2931b79 is in state STARTED
2025-09-19 07:16:28.127362 | orchestrator | 2025-09-19 07:16:28 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:28.127973 | orchestrator | 2025-09-19 07:16:28 | INFO  | Task 7b9012bf-d2b0-42a3-95f3-089f521306b4 is in state SUCCESS
2025-09-19 07:16:28.128836 | orchestrator | 2025-09-19 07:16:28 | INFO  | Task 61408106-1e3a-4c19-804b-c0c02af53aae is in state STARTED
2025-09-19 07:16:28.129258 | orchestrator | 2025-09-19 07:16:28 | INFO  | Task 12e987fa-bac5-4bf8-b874-9032a3a062e8 is in state STARTED
2025-09-19 07:16:28.129796 | orchestrator | 2025-09-19 07:16:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:31.239849 | orchestrator | 2025-09-19 07:16:31 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:16:31.239969 | orchestrator | 2025-09-19 07:16:31 | INFO  | Task ccde68b7-e1bf-4ccf-b892-5ce2c2931b79 is in state STARTED
2025-09-19 07:16:31.240998 | orchestrator | 2025-09-19 07:16:31 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:31.242005 | orchestrator | 2025-09-19 07:16:31 | INFO  | Task 61408106-1e3a-4c19-804b-c0c02af53aae is in state STARTED
2025-09-19 07:16:31.242786 | orchestrator | 2025-09-19 07:16:31 | INFO  | Task 12e987fa-bac5-4bf8-b874-9032a3a062e8 is in state STARTED
2025-09-19 07:16:31.242823 | orchestrator | 2025-09-19 07:16:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:34.297301 | orchestrator | 2025-09-19 07:16:34 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:16:34.297410 | orchestrator | 2025-09-19 07:16:34 | INFO  | Task ccde68b7-e1bf-4ccf-b892-5ce2c2931b79 is in state STARTED
2025-09-19 07:16:34.297426 | orchestrator | 2025-09-19 07:16:34 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state STARTED
2025-09-19 07:16:34.297438 | orchestrator | 2025-09-19 07:16:34 | INFO  | Task 61408106-1e3a-4c19-804b-c0c02af53aae is in state STARTED
2025-09-19 07:16:34.297449 | orchestrator | 2025-09-19 07:16:34 | INFO  | Task 12e987fa-bac5-4bf8-b874-9032a3a062e8 is in state STARTED
2025-09-19 07:16:34.297460 | orchestrator | 2025-09-19 07:16:34 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:16:37.311239 | orchestrator | 2025-09-19 07:16:37 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:16:37.311456 | orchestrator | 2025-09-19 07:16:37 | INFO  | Task ccde68b7-e1bf-4ccf-b892-5ce2c2931b79 is in state STARTED
2025-09-19 07:16:37.313347 | orchestrator | 2025-09-19 07:16:37 | INFO  | Task 97203133-f38f-4cd5-910e-dec6d1b0630c is in state SUCCESS
2025-09-19 07:16:37.314844 | orchestrator | 
2025-09-19 07:16:37.315018 | orchestrator | 
2025-09-19 07:16:37.315032 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-09-19 07:16:37.315044 | orchestrator | 
2025-09-19 07:16:37.315055 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-09-19 07:16:37.315067 | orchestrator | Friday 19 September 2025 07:15:29 +0000 (0:00:00.245) 0:00:00.245 ******
2025-09-19 07:16:37.315078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-09-19 07:16:37.315091 | orchestrator | 
2025-09-19 07:16:37.315102 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-09-19 07:16:37.315113 | orchestrator | Friday 19 September 2025 07:15:29 +0000 (0:00:00.242) 0:00:00.487 ******
2025-09-19 07:16:37.315124 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-09-19 07:16:37.315135 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-09-19 07:16:37.315748 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-09-19 07:16:37.315773 | orchestrator | 
2025-09-19 07:16:37.315793 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-09-19 07:16:37.315814 | orchestrator | Friday 19 September 2025 07:15:31 +0000 (0:00:01.213) 0:00:01.700 ******
2025-09-19 07:16:37.315860 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-09-19 07:16:37.315872 | orchestrator | 
2025-09-19 07:16:37.315883 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-09-19 07:16:37.315894 | orchestrator | Friday 19 September 2025 07:15:32 +0000 (0:00:01.099) 0:00:02.800 ******
2025-09-19 07:16:37.315905 | orchestrator | changed: [testbed-manager]
2025-09-19 07:16:37.315916 | orchestrator | 
2025-09-19 07:16:37.315926 | 
orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-19 07:16:37.315937 | orchestrator | Friday 19 September 2025 07:15:33 +0000 (0:00:01.155) 0:00:03.955 ****** 2025-09-19 07:16:37.315947 | orchestrator | changed: [testbed-manager] 2025-09-19 07:16:37.315958 | orchestrator | 2025-09-19 07:16:37.315969 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-19 07:16:37.315979 | orchestrator | Friday 19 September 2025 07:15:34 +0000 (0:00:00.922) 0:00:04.878 ****** 2025-09-19 07:16:37.315990 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-19 07:16:37.316001 | orchestrator | ok: [testbed-manager] 2025-09-19 07:16:37.316011 | orchestrator | 2025-09-19 07:16:37.316022 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-19 07:16:37.316044 | orchestrator | Friday 19 September 2025 07:16:09 +0000 (0:00:35.505) 0:00:40.384 ****** 2025-09-19 07:16:37.316056 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-19 07:16:37.316067 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-19 07:16:37.316078 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-19 07:16:37.316088 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-19 07:16:37.316099 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-19 07:16:37.316109 | orchestrator | 2025-09-19 07:16:37.316120 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-19 07:16:37.316131 | orchestrator | Friday 19 September 2025 07:16:13 +0000 (0:00:04.187) 0:00:44.571 ****** 2025-09-19 07:16:37.316142 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-19 07:16:37.316153 | orchestrator | 2025-09-19 07:16:37.316163 | orchestrator | TASK [osism.services.cephclient : Include 
package tasks] *********************** 2025-09-19 07:16:37.316174 | orchestrator | Friday 19 September 2025 07:16:14 +0000 (0:00:00.454) 0:00:45.026 ****** 2025-09-19 07:16:37.316185 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:16:37.316195 | orchestrator | 2025-09-19 07:16:37.316206 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-19 07:16:37.316217 | orchestrator | Friday 19 September 2025 07:16:14 +0000 (0:00:00.144) 0:00:45.171 ****** 2025-09-19 07:16:37.316227 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:16:37.316238 | orchestrator | 2025-09-19 07:16:37.316248 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-19 07:16:37.316259 | orchestrator | Friday 19 September 2025 07:16:14 +0000 (0:00:00.325) 0:00:45.496 ****** 2025-09-19 07:16:37.316270 | orchestrator | changed: [testbed-manager] 2025-09-19 07:16:37.316280 | orchestrator | 2025-09-19 07:16:37.316291 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-19 07:16:37.316302 | orchestrator | Friday 19 September 2025 07:16:17 +0000 (0:00:02.195) 0:00:47.691 ****** 2025-09-19 07:16:37.316312 | orchestrator | changed: [testbed-manager] 2025-09-19 07:16:37.316325 | orchestrator | 2025-09-19 07:16:37.316337 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-19 07:16:37.316349 | orchestrator | Friday 19 September 2025 07:16:17 +0000 (0:00:00.844) 0:00:48.535 ****** 2025-09-19 07:16:37.316361 | orchestrator | changed: [testbed-manager] 2025-09-19 07:16:37.316373 | orchestrator | 2025-09-19 07:16:37.316385 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-19 07:16:37.316397 | orchestrator | Friday 19 September 2025 07:16:18 +0000 (0:00:00.627) 0:00:49.162 ****** 2025-09-19 07:16:37.316409 | orchestrator | ok: 
[testbed-manager] => (item=ceph) 2025-09-19 07:16:37.316427 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-19 07:16:37.316438 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-19 07:16:37.316449 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-19 07:16:37.316459 | orchestrator | 2025-09-19 07:16:37.316470 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:16:37.316481 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 07:16:37.316491 | orchestrator | 2025-09-19 07:16:37.316502 | orchestrator | 2025-09-19 07:16:37.316552 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:16:37.316565 | orchestrator | Friday 19 September 2025 07:16:20 +0000 (0:00:01.465) 0:00:50.628 ****** 2025-09-19 07:16:37.316576 | orchestrator | =============================================================================== 2025-09-19 07:16:37.316586 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.51s 2025-09-19 07:16:37.316597 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.19s 2025-09-19 07:16:37.316608 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.20s 2025-09-19 07:16:37.316619 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.47s 2025-09-19 07:16:37.316629 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.21s 2025-09-19 07:16:37.316640 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.16s 2025-09-19 07:16:37.316651 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.10s 2025-09-19 07:16:37.316661 | orchestrator | osism.services.cephclient : Copy 
docker-compose.yml file ---------------- 0.92s 2025-09-19 07:16:37.316672 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s 2025-09-19 07:16:37.316683 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.63s 2025-09-19 07:16:37.316694 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2025-09-19 07:16:37.316704 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s 2025-09-19 07:16:37.316745 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2025-09-19 07:16:37.316756 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2025-09-19 07:16:37.316767 | orchestrator | 2025-09-19 07:16:37.316778 | orchestrator | 2025-09-19 07:16:37.316788 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:16:37.316799 | orchestrator | 2025-09-19 07:16:37.316810 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:16:37.316821 | orchestrator | Friday 19 September 2025 07:16:24 +0000 (0:00:00.182) 0:00:00.182 ****** 2025-09-19 07:16:37.316905 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:16:37.316920 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:16:37.316931 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:16:37.316942 | orchestrator | 2025-09-19 07:16:37.316953 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:16:37.316964 | orchestrator | Friday 19 September 2025 07:16:24 +0000 (0:00:00.304) 0:00:00.487 ****** 2025-09-19 07:16:37.316975 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-19 07:16:37.316993 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-19 07:16:37.317004 | 
orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-19 07:16:37.317015 | orchestrator | 2025-09-19 07:16:37.317026 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-19 07:16:37.317036 | orchestrator | 2025-09-19 07:16:37.317047 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-19 07:16:37.317058 | orchestrator | Friday 19 September 2025 07:16:25 +0000 (0:00:00.603) 0:00:01.090 ****** 2025-09-19 07:16:37.317068 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:16:37.317087 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:16:37.317098 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:16:37.317108 | orchestrator | 2025-09-19 07:16:37.317119 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:16:37.317130 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:16:37.317142 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:16:37.317152 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:16:37.317163 | orchestrator | 2025-09-19 07:16:37.317174 | orchestrator | 2025-09-19 07:16:37.317185 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:16:37.317195 | orchestrator | Friday 19 September 2025 07:16:25 +0000 (0:00:00.643) 0:00:01.734 ****** 2025-09-19 07:16:37.317206 | orchestrator | =============================================================================== 2025-09-19 07:16:37.317217 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.64s 2025-09-19 07:16:37.317227 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-09-19 
07:16:37.317238 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-09-19 07:16:37.317248 | orchestrator | 2025-09-19 07:16:37.317259 | orchestrator | 2025-09-19 07:16:37.317270 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:16:37.317280 | orchestrator | 2025-09-19 07:16:37.317291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:16:37.317301 | orchestrator | Friday 19 September 2025 07:13:52 +0000 (0:00:00.266) 0:00:00.266 ****** 2025-09-19 07:16:37.317312 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:16:37.317323 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:16:37.317333 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:16:37.317344 | orchestrator | 2025-09-19 07:16:37.317355 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:16:37.317365 | orchestrator | Friday 19 September 2025 07:13:52 +0000 (0:00:00.278) 0:00:00.544 ****** 2025-09-19 07:16:37.317376 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-19 07:16:37.317387 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-19 07:16:37.317398 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-19 07:16:37.317408 | orchestrator | 2025-09-19 07:16:37.317419 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-19 07:16:37.317430 | orchestrator | 2025-09-19 07:16:37.317473 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:16:37.317487 | orchestrator | Friday 19 September 2025 07:13:53 +0000 (0:00:00.399) 0:00:00.944 ****** 2025-09-19 07:16:37.317498 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 
07:16:37.317508 | orchestrator | 2025-09-19 07:16:37.317519 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-19 07:16:37.317529 | orchestrator | Friday 19 September 2025 07:13:53 +0000 (0:00:00.515) 0:00:01.460 ****** 2025-09-19 07:16:37.317619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.317858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.317892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.317906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.317979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.317993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 
07:16:37.318084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318109 | orchestrator | 2025-09-19 07:16:37.318121 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-19 07:16:37.318132 | orchestrator | Friday 19 September 2025 07:13:55 +0000 (0:00:01.678) 0:00:03.139 ****** 2025-09-19 07:16:37.318144 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-19 07:16:37.318155 | orchestrator | 2025-09-19 07:16:37.318165 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-19 07:16:37.318175 | orchestrator | Friday 19 September 2025 07:13:56 +0000 (0:00:00.827) 
0:00:03.966 ****** 2025-09-19 07:16:37.318185 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:16:37.318195 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:16:37.318205 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:16:37.318215 | orchestrator | 2025-09-19 07:16:37.318224 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-19 07:16:37.318234 | orchestrator | Friday 19 September 2025 07:13:56 +0000 (0:00:00.533) 0:00:04.500 ****** 2025-09-19 07:16:37.318243 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:16:37.318253 | orchestrator | 2025-09-19 07:16:37.318263 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:16:37.318273 | orchestrator | Friday 19 September 2025 07:13:57 +0000 (0:00:00.690) 0:00:05.190 ****** 2025-09-19 07:16:37.318282 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:16:37.318292 | orchestrator | 2025-09-19 07:16:37.318308 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-19 07:16:37.318318 | orchestrator | Friday 19 September 2025 07:13:58 +0000 (0:00:00.519) 0:00:05.710 ****** 2025-09-19 07:16:37.318329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.318346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.318385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.318397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318476 | orchestrator | 2025-09-19 07:16:37.318486 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-19 07:16:37.318495 | orchestrator | Friday 19 September 2025 07:14:01 +0000 (0:00:03.197) 0:00:08.908 ****** 2025-09-19 07:16:37.318506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 07:16:37.318523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:16:37.318546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:16:37.318556 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.318570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 07:16:37.318581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:16:37.318592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:16:37.318602 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:16:37.318618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:16:37.318634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:16:37.318645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:16:37.318656 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:16:37.318665 | orchestrator |
2025-09-19 07:16:37.318675 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-09-19 07:16:37.318684 | orchestrator | Friday 19 September 2025 07:14:02 +0000 (0:00:00.734) 0:00:09.642 ******
2025-09-19 07:16:37.318698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:16:37.318728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:16:37.318739 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:16:37.318755 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.318772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 07:16:37.318783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:16:37.318797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 07:16:37.318808 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:16:37.318818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:16:37.318829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 07:16:37.318849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:16:37.318859 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:16:37.318869 | orchestrator |
2025-09-19 07:16:37.318878 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-09-19 07:16:37.318888 | orchestrator | Friday 19 September 2025 07:14:02 +0000 (0:00:00.717) 0:00:10.360 ******
2025-09-19 07:16:37.318898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image':
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.318913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.318924 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.318946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.318991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:16:37.319001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:16:37.319022 | orchestrator |
2025-09-19 07:16:37.319032 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-09-19 07:16:37.319042 | orchestrator | Friday 19 September 2025 07:14:05 +0000 (0:00:03.191) 0:00:13.551 ******
2025-09-19 07:16:37.319057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']},
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.319068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:16:37.319079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.319094 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:16:37.319104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.319120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:16:37.319136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.319146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.319156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 07:16:37.319166 | orchestrator |
2025-09-19 07:16:37.319175 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-09-19 07:16:37.319189 | orchestrator | Friday 19 September 2025 07:14:11 +0000 (0:00:05.355) 0:00:18.907 ******
2025-09-19 07:16:37.319198 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:16:37.319208 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:16:37.319217 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:16:37.319227 | orchestrator |
2025-09-19 07:16:37.319236 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-09-19 07:16:37.319246 | orchestrator | Friday 19 September 2025 07:14:12 +0000 (0:00:01.433) 0:00:20.340 ******
2025-09-19 07:16:37.319255 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:16:37.319269 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:16:37.319279 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:16:37.319289 | orchestrator |
2025-09-19 07:16:37.319298 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-09-19 07:16:37.319308 | orchestrator | Friday 19 September 2025 07:14:13 +0000 (0:00:00.558) 0:00:20.899 ******
2025-09-19 07:16:37.319317 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:16:37.319326 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:16:37.319336 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:16:37.319345 | orchestrator |
2025-09-19 07:16:37.319355 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-09-19 07:16:37.319364 | orchestrator | Friday 19 September 2025 07:14:13 +0000 (0:00:00.288) 0:00:21.188 ******
2025-09-19 07:16:37.319373 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:16:37.319383 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:16:37.319392 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:16:37.319402 | orchestrator |
2025-09-19 07:16:37.319411 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-09-19 07:16:37.319420 | orchestrator | Friday 19 September 2025 07:14:14 +0000 (0:00:00.481) 0:00:21.669 ******
2025-09-19 07:16:37.319431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 07:16:37.319447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:16:37.319458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.319475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:16:37.319492 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.319503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 07:16:37.319519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.319529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.319539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.319554 | orchestrator | 2025-09-19 07:16:37.319564 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:16:37.319573 | orchestrator | Friday 19 September 2025 07:14:16 +0000 (0:00:02.358) 0:00:24.028 ****** 2025-09-19 07:16:37.319583 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.319593 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:16:37.319602 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 07:16:37.319612 | orchestrator | 2025-09-19 07:16:37.319625 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-19 07:16:37.319635 | orchestrator | Friday 19 September 2025 07:14:16 +0000 (0:00:00.307) 0:00:24.336 ****** 2025-09-19 07:16:37.319644 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 07:16:37.319654 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 07:16:37.319664 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 07:16:37.319673 | orchestrator | 2025-09-19 07:16:37.319683 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-19 07:16:37.319692 | orchestrator | Friday 19 September 2025 07:14:18 +0000 (0:00:01.665) 0:00:26.001 ****** 2025-09-19 07:16:37.319702 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:16:37.319726 | orchestrator | 2025-09-19 07:16:37.319736 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-19 07:16:37.319746 | orchestrator | Friday 19 September 2025 07:14:19 +0000 (0:00:00.857) 0:00:26.858 ****** 2025-09-19 07:16:37.319755 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.319765 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:16:37.319774 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:16:37.319784 | orchestrator | 2025-09-19 07:16:37.319793 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-19 07:16:37.319803 | orchestrator | Friday 19 September 2025 07:14:20 +0000 (0:00:00.766) 0:00:27.625 ****** 2025-09-19 07:16:37.319812 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 07:16:37.319822 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:16:37.319831 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 07:16:37.319841 | orchestrator | 2025-09-19 07:16:37.319850 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-19 07:16:37.319860 | orchestrator | Friday 19 September 2025 07:14:21 +0000 (0:00:01.039) 0:00:28.665 ****** 2025-09-19 07:16:37.319870 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:16:37.319879 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:16:37.319889 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:16:37.319898 | orchestrator | 2025-09-19 07:16:37.319907 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-19 07:16:37.319917 | orchestrator | Friday 19 September 2025 07:14:21 +0000 (0:00:00.291) 0:00:28.956 ****** 2025-09-19 07:16:37.319927 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 07:16:37.319936 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 07:16:37.319946 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 07:16:37.319955 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 07:16:37.319965 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 07:16:37.319980 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 07:16:37.319990 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 07:16:37.320000 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 
07:16:37.320014 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 07:16:37.320024 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 07:16:37.320034 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 07:16:37.320043 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 07:16:37.320052 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 07:16:37.320062 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 07:16:37.320072 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 07:16:37.320081 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 07:16:37.320091 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 07:16:37.320100 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 07:16:37.320110 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 07:16:37.320120 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 07:16:37.320129 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 07:16:37.320138 | orchestrator | 2025-09-19 07:16:37.320148 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-19 07:16:37.320158 | orchestrator | Friday 19 September 2025 07:14:30 +0000 (0:00:08.895) 0:00:37.851 
****** 2025-09-19 07:16:37.320167 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 07:16:37.320180 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 07:16:37.320190 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 07:16:37.320200 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 07:16:37.320209 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 07:16:37.320218 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 07:16:37.320228 | orchestrator | 2025-09-19 07:16:37.320237 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-19 07:16:37.320247 | orchestrator | Friday 19 September 2025 07:14:33 +0000 (0:00:03.182) 0:00:41.034 ****** 2025-09-19 07:16:37.320257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.320274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.320291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 07:16:37.320305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.320316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.320326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 07:16:37.320336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.320356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.320366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 07:16:37.320376 | orchestrator | 2025-09-19 07:16:37.320386 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:16:37.320395 | orchestrator | Friday 19 September 2025 07:14:35 +0000 (0:00:02.447) 0:00:43.481 ****** 2025-09-19 07:16:37.320405 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.320414 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:16:37.320424 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:16:37.320433 | orchestrator | 2025-09-19 07:16:37.320443 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-19 07:16:37.320453 | orchestrator | Friday 19 September 2025 07:14:36 +0000 (0:00:00.299) 0:00:43.781 ****** 2025-09-19 07:16:37.320462 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:16:37.320472 | orchestrator | 2025-09-19 07:16:37.320481 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-19 07:16:37.320491 | orchestrator | Friday 19 September 2025 07:14:38 +0000 (0:00:02.313) 0:00:46.094 ****** 2025-09-19 07:16:37.320500 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:16:37.320510 | orchestrator | 2025-09-19 07:16:37.320519 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-19 07:16:37.320529 | orchestrator | Friday 19 September 2025 07:14:40 +0000 (0:00:01.974) 0:00:48.069 ****** 2025-09-19 07:16:37.320538 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:16:37.320548 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:16:37.320558 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:16:37.320567 | orchestrator | 2025-09-19 07:16:37.320580 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-19 07:16:37.320590 | orchestrator | Friday 19 September 2025 
07:14:41 +0000 (0:00:00.764) 0:00:48.833 ****** 2025-09-19 07:16:37.320599 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:16:37.320609 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:16:37.320618 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:16:37.320627 | orchestrator | 2025-09-19 07:16:37.320637 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-19 07:16:37.320646 | orchestrator | Friday 19 September 2025 07:14:41 +0000 (0:00:00.579) 0:00:49.412 ****** 2025-09-19 07:16:37.320656 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.320665 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:16:37.320680 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:16:37.320690 | orchestrator | 2025-09-19 07:16:37.320699 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-19 07:16:37.320731 | orchestrator | Friday 19 September 2025 07:14:42 +0000 (0:00:00.481) 0:00:49.893 ****** 2025-09-19 07:16:37.320742 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:16:37.320751 | orchestrator | 2025-09-19 07:16:37.320761 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-19 07:16:37.320770 | orchestrator | Friday 19 September 2025 07:14:54 +0000 (0:00:12.426) 0:01:02.320 ****** 2025-09-19 07:16:37.320780 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:16:37.320789 | orchestrator | 2025-09-19 07:16:37.320799 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 07:16:37.320808 | orchestrator | Friday 19 September 2025 07:15:04 +0000 (0:00:09.654) 0:01:11.974 ****** 2025-09-19 07:16:37.320817 | orchestrator | 2025-09-19 07:16:37.320827 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 07:16:37.320836 | orchestrator | Friday 19 September 2025 07:15:04 +0000 
(0:00:00.062) 0:01:12.037 ****** 2025-09-19 07:16:37.320846 | orchestrator | 2025-09-19 07:16:37.320855 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 07:16:37.320864 | orchestrator | Friday 19 September 2025 07:15:04 +0000 (0:00:00.075) 0:01:12.112 ****** 2025-09-19 07:16:37.320874 | orchestrator | 2025-09-19 07:16:37.320883 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-19 07:16:37.320893 | orchestrator | Friday 19 September 2025 07:15:04 +0000 (0:00:00.067) 0:01:12.179 ****** 2025-09-19 07:16:37.320902 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:16:37.320911 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:16:37.320921 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:16:37.320931 | orchestrator | 2025-09-19 07:16:37.320940 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-19 07:16:37.320949 | orchestrator | Friday 19 September 2025 07:15:30 +0000 (0:00:26.052) 0:01:38.232 ****** 2025-09-19 07:16:37.320959 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:16:37.320968 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:16:37.320978 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:16:37.320987 | orchestrator | 2025-09-19 07:16:37.320997 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-19 07:16:37.321006 | orchestrator | Friday 19 September 2025 07:15:40 +0000 (0:00:10.192) 0:01:48.425 ****** 2025-09-19 07:16:37.321016 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:16:37.321026 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:16:37.321041 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:16:37.321050 | orchestrator | 2025-09-19 07:16:37.321060 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 
07:16:37.321069 | orchestrator | Friday 19 September 2025 07:15:48 +0000 (0:00:07.529) 0:01:55.954 ****** 2025-09-19 07:16:37.321079 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:16:37.321088 | orchestrator | 2025-09-19 07:16:37.321098 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-19 07:16:37.321107 | orchestrator | Friday 19 September 2025 07:15:49 +0000 (0:00:00.769) 0:01:56.723 ****** 2025-09-19 07:16:37.321117 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:16:37.321127 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:16:37.321136 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:16:37.321145 | orchestrator | 2025-09-19 07:16:37.321155 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-19 07:16:37.321165 | orchestrator | Friday 19 September 2025 07:15:49 +0000 (0:00:00.827) 0:01:57.551 ****** 2025-09-19 07:16:37.321174 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:16:37.321184 | orchestrator | 2025-09-19 07:16:37.321193 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-19 07:16:37.321208 | orchestrator | Friday 19 September 2025 07:15:51 +0000 (0:00:01.805) 0:01:59.356 ****** 2025-09-19 07:16:37.321218 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-19 07:16:37.321227 | orchestrator | 2025-09-19 07:16:37.321237 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-19 07:16:37.321246 | orchestrator | Friday 19 September 2025 07:16:01 +0000 (0:00:10.216) 0:02:09.573 ****** 2025-09-19 07:16:37.321256 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-19 07:16:37.321265 | orchestrator | 2025-09-19 07:16:37.321275 | orchestrator | TASK [service-ks-register : keystone | Creating 
endpoints] ********************* 2025-09-19 07:16:37.321284 | orchestrator | Friday 19 September 2025 07:16:21 +0000 (0:00:19.594) 0:02:29.167 ****** 2025-09-19 07:16:37.321294 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-19 07:16:37.321303 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-19 07:16:37.321313 | orchestrator | 2025-09-19 07:16:37.321322 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-19 07:16:37.321332 | orchestrator | Friday 19 September 2025 07:16:28 +0000 (0:00:07.215) 0:02:36.383 ****** 2025-09-19 07:16:37.321341 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.321351 | orchestrator | 2025-09-19 07:16:37.321360 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-19 07:16:37.321374 | orchestrator | Friday 19 September 2025 07:16:29 +0000 (0:00:00.410) 0:02:36.794 ****** 2025-09-19 07:16:37.321383 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.321393 | orchestrator | 2025-09-19 07:16:37.321403 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-19 07:16:37.321412 | orchestrator | Friday 19 September 2025 07:16:29 +0000 (0:00:00.210) 0:02:37.004 ****** 2025-09-19 07:16:37.321422 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.321431 | orchestrator | 2025-09-19 07:16:37.321441 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-19 07:16:37.321450 | orchestrator | Friday 19 September 2025 07:16:29 +0000 (0:00:00.328) 0:02:37.332 ****** 2025-09-19 07:16:37.321460 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.321469 | orchestrator | 2025-09-19 07:16:37.321479 | orchestrator | TASK [keystone : Creating default user role] 
*********************************** 2025-09-19 07:16:37.321488 | orchestrator | Friday 19 September 2025 07:16:30 +0000 (0:00:00.823) 0:02:38.156 ****** 2025-09-19 07:16:37.321498 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:16:37.321507 | orchestrator | 2025-09-19 07:16:37.321517 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 07:16:37.321526 | orchestrator | Friday 19 September 2025 07:16:34 +0000 (0:00:03.509) 0:02:41.666 ****** 2025-09-19 07:16:37.321536 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:16:37.321545 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:16:37.321555 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:16:37.321564 | orchestrator | 2025-09-19 07:16:37.321574 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:16:37.321583 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-19 07:16:37.321594 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-19 07:16:37.321603 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-19 07:16:37.321613 | orchestrator | 2025-09-19 07:16:37.321622 | orchestrator | 2025-09-19 07:16:37.321632 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:16:37.321641 | orchestrator | Friday 19 September 2025 07:16:34 +0000 (0:00:00.924) 0:02:42.590 ****** 2025-09-19 07:16:37.321651 | orchestrator | =============================================================================== 2025-09-19 07:16:37.321665 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 26.05s 2025-09-19 07:16:37.321675 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.59s 
2025-09-19 07:16:37.321684 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.43s 2025-09-19 07:16:37.321693 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.22s 2025-09-19 07:16:37.321703 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.19s 2025-09-19 07:16:37.321755 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.65s 2025-09-19 07:16:37.321766 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.90s 2025-09-19 07:16:37.321775 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.53s 2025-09-19 07:16:37.321785 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.22s 2025-09-19 07:16:37.321794 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.36s 2025-09-19 07:16:37.321804 | orchestrator | keystone : Creating default user role ----------------------------------- 3.51s 2025-09-19 07:16:37.321813 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.20s 2025-09-19 07:16:37.321823 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.19s 2025-09-19 07:16:37.321832 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.18s 2025-09-19 07:16:37.321842 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.45s 2025-09-19 07:16:37.321851 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.36s 2025-09-19 07:16:37.321861 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.31s 2025-09-19 07:16:37.321870 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 1.97s 2025-09-19 
07:16:37.321880 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.81s
2025-09-19 07:16:37.321890 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.68s
2025-09-19 07:16:40.351832 | orchestrator | 2025-09-19 07:16:40 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:16:40.353602 | orchestrator | 2025-09-19 07:16:40 | INFO  | Task ccde68b7-e1bf-4ccf-b892-5ce2c2931b79 is in state STARTED
2025-09-19 07:16:40.355622 | orchestrator | 2025-09-19 07:16:40 | INFO  | Task 61408106-1e3a-4c19-804b-c0c02af53aae is in state STARTED
2025-09-19 07:16:40.357085 | orchestrator | 2025-09-19 07:16:40 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:16:40.358453 | orchestrator | 2025-09-19 07:16:40 | INFO  | Task 12e987fa-bac5-4bf8-b874-9032a3a062e8 is in state STARTED
2025-09-19 07:16:40.358897 | orchestrator | 2025-09-19 07:16:40 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:17:07.770274 | orchestrator | 2025-09-19 07:17:07 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:17:07.771661 | orchestrator | 2025-09-19 07:17:07 | INFO  | Task ccde68b7-e1bf-4ccf-b892-5ce2c2931b79 is in state SUCCESS
2025-09-19 07:17:07.773615 | orchestrator | 2025-09-19 07:17:07 | INFO  | Task 61408106-1e3a-4c19-804b-c0c02af53aae is in state STARTED
2025-09-19 07:17:07.775276 | orchestrator | 2025-09-19 07:17:07 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:17:07.776795 | orchestrator | 2025-09-19 07:17:07 | INFO  | Task 12e987fa-bac5-4bf8-b874-9032a3a062e8 is in state STARTED
2025-09-19 07:17:07.776900 | orchestrator | 2025-09-19 07:17:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:17:10.810267 | orchestrator | 2025-09-19 07:17:10 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:17:10.810360 | orchestrator | 2025-09-19 07:17:10 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:17:10.811470 | orchestrator | 2025-09-19 07:17:10 | INFO  | Task 61408106-1e3a-4c19-804b-c0c02af53aae is in state STARTED
2025-09-19 07:17:10.814431 | orchestrator | 2025-09-19 07:17:10 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:17:10.814797 | orchestrator | 2025-09-19 07:17:10 | INFO  | Task 12e987fa-bac5-4bf8-b874-9032a3a062e8 is in state STARTED
2025-09-19 07:17:10.814862 | orchestrator | 2025-09-19 07:17:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:17:53.320883 | orchestrator | 2025-09-19 07:17:53 | INFO  | Task
de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:17:53.321189 | orchestrator | 2025-09-19 07:17:53 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:17:53.321735 | orchestrator | 2025-09-19 07:17:53 | INFO  | Task 61408106-1e3a-4c19-804b-c0c02af53aae is in state STARTED
2025-09-19 07:17:53.322284 | orchestrator | 2025-09-19 07:17:53 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:17:53.322936 | orchestrator | 2025-09-19 07:17:53 | INFO  | Task 12e987fa-bac5-4bf8-b874-9032a3a062e8 is in state SUCCESS
2025-09-19 07:17:53.323282 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:17:53.323328 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:17:53.323339 | orchestrator | Friday 19 September 2025 07:16:31 +0000 (0:00:00.307) 0:00:00.307 ******
2025-09-19 07:17:53.323350 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:17:53.323361 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:17:53.323372 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:17:53.323382 | orchestrator | ok: [testbed-manager]
2025-09-19 07:17:53.323393 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:17:53.323404 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:17:53.323415 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:17:53.323436 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:17:53.323447 | orchestrator | Friday 19 September 2025 07:16:32 +0000 (0:00:00.978) 0:00:01.286 ******
2025-09-19 07:17:53.323457 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-09-19 07:17:53.323468 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-09-19 07:17:53.323479 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-09-19 07:17:53.323490 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-09-19 07:17:53.323500 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-09-19 07:17:53.323511 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-09-19 07:17:53.323521 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-09-19 07:17:53.323542 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-19 07:17:53.323564 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-09-19 07:17:53.323574 | orchestrator | Friday 19 September 2025 07:16:33 +0000 (0:00:00.822) 0:00:02.108 ******
2025-09-19 07:17:53.323586 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:17:53.323608 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-09-19 07:17:53.323618 | orchestrator | Friday 19 September 2025 07:16:35 +0000 (0:00:02.558) 0:00:04.667 ******
2025-09-19 07:17:53.323629 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-09-19 07:17:53.323650 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-09-19 07:17:53.323660 | orchestrator | Friday 19 September 2025 07:16:39 +0000 (0:00:04.150) 0:00:08.817 ******
2025-09-19 07:17:53.323671 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-09-19 07:17:53.323683 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-09-19 07:17:53.323728 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-09-19 07:17:53.323739 | orchestrator | Friday 19 September 2025 07:16:46 +0000 (0:00:06.839) 0:00:15.656 ******
2025-09-19 07:17:53.323750 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:17:53.323772 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-09-19 07:17:53.323782 | orchestrator | Friday 19 September 2025 07:16:50 +0000 (0:00:03.783) 0:00:19.440 ******
2025-09-19 07:17:53.323793 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:17:53.323804 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-09-19 07:17:53.323825 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-09-19 07:17:53.323835 | orchestrator | Friday 19 September 2025 07:16:54 +0000 (0:00:04.247) 0:00:23.687 ******
2025-09-19 07:17:53.323853 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:17:53.323867 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-09-19 07:17:53.323890 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-09-19 07:17:53.323902 | orchestrator | Friday 19 September 2025 07:17:01 +0000 (0:00:06.766) 0:00:30.454 ******
2025-09-19 07:17:53.323914 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
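The service-ks-register tasks above are idempotent: items that already exist in Keystone report `ok` (e.g. the `admin` role and the `service` project), while missing items are created and report `changed` (e.g. the `ResellerAdmin` role). A minimal self-contained sketch of that ensure-style pattern — a toy in-memory catalog, not the actual service-ks-register implementation:

```python
# Hypothetical in-memory stand-in for the Keystone catalog;
# the real tasks talk to the Keystone API instead.
catalog = {
    "project": {"service"},
    "role": {"admin"},
}

def ensure(kind: str, name: str) -> str:
    """Create the object only if it is missing, and report an
    Ansible-style status: 'ok' if present, 'changed' if created."""
    existing = catalog.setdefault(kind, set())
    if name in existing:
        return "ok"        # already present, nothing to do
    existing.add(name)
    return "changed"       # had to create it

# Mirrors the log: admin already exists, ResellerAdmin gets created.
print(ensure("role", "admin"))          # ok
print(ensure("role", "ResellerAdmin"))  # changed
```

Running the same play twice therefore converges: the second run reports `ok` for every item and `changed=0` in the recap.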
2025-09-19 07:17:53.323939 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:17:53.323951 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.323975 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.323989 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.324001 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.324014 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.324044 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.324060 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.324072 | orchestrator | 2025-09-19 07:17:53.324084 | orchestrator | 2025-09-19 07:17:53.324096 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:17:53.324109 | orchestrator | Friday 19 September 2025 07:17:06 +0000 (0:00:05.089) 0:00:35.544 ****** 2025-09-19 07:17:53.324121 | orchestrator | =============================================================================== 2025-09-19 07:17:53.324134 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.84s 2025-09-19 07:17:53.324146 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.77s 2025-09-19 07:17:53.324158 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.09s 2025-09-19 07:17:53.324170 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.25s 2025-09-19 
07:17:53.324182 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.15s 2025-09-19 07:17:53.324194 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.78s 2025-09-19 07:17:53.324206 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.56s 2025-09-19 07:17:53.324218 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.98s 2025-09-19 07:17:53.324231 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2025-09-19 07:17:53.324243 | orchestrator | 2025-09-19 07:17:53.324256 | orchestrator | 2025-09-19 07:17:53.324275 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-19 07:17:53.324293 | orchestrator | 2025-09-19 07:17:53.324312 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-19 07:17:53.324331 | orchestrator | Friday 19 September 2025 07:16:24 +0000 (0:00:00.273) 0:00:00.273 ****** 2025-09-19 07:17:53.324343 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:53.324353 | orchestrator | 2025-09-19 07:17:53.324364 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-19 07:17:53.324375 | orchestrator | Friday 19 September 2025 07:16:26 +0000 (0:00:01.647) 0:00:01.921 ****** 2025-09-19 07:17:53.324385 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:53.324404 | orchestrator | 2025-09-19 07:17:53.324415 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-19 07:17:53.324425 | orchestrator | Friday 19 September 2025 07:16:27 +0000 (0:00:00.890) 0:00:02.811 ****** 2025-09-19 07:17:53.324436 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:53.324447 | orchestrator | 2025-09-19 07:17:53.324457 | orchestrator | TASK [Set 
mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-19 07:17:53.324468 | orchestrator | Friday 19 September 2025 07:16:28 +0000 (0:00:01.132) 0:00:03.943 ****** 2025-09-19 07:17:53.324479 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:53.324490 | orchestrator | 2025-09-19 07:17:53.324500 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-19 07:17:53.324511 | orchestrator | Friday 19 September 2025 07:16:29 +0000 (0:00:01.679) 0:00:05.623 ****** 2025-09-19 07:17:53.324522 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:53.324532 | orchestrator | 2025-09-19 07:17:53.324543 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-19 07:17:53.324553 | orchestrator | Friday 19 September 2025 07:16:31 +0000 (0:00:01.377) 0:00:07.001 ****** 2025-09-19 07:17:53.324564 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:53.324575 | orchestrator | 2025-09-19 07:17:53.324586 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-19 07:17:53.324597 | orchestrator | Friday 19 September 2025 07:16:32 +0000 (0:00:01.036) 0:00:08.037 ****** 2025-09-19 07:17:53.324607 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:53.324618 | orchestrator | 2025-09-19 07:17:53.324628 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-19 07:17:53.324639 | orchestrator | Friday 19 September 2025 07:16:33 +0000 (0:00:01.012) 0:00:09.050 ****** 2025-09-19 07:17:53.324650 | orchestrator | changed: [testbed-manager] 2025-09-19 07:17:53.324660 | orchestrator | 2025-09-19 07:17:53.324671 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-19 07:17:53.324682 | orchestrator | Friday 19 September 2025 07:16:34 +0000 (0:00:01.077) 0:00:10.128 ****** 2025-09-19 07:17:53.324692 
| orchestrator | changed: [testbed-manager] 2025-09-19 07:17:53.324722 | orchestrator | 2025-09-19 07:17:53.324733 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-19 07:17:53.324744 | orchestrator | Friday 19 September 2025 07:17:27 +0000 (0:00:53.358) 0:01:03.486 ****** 2025-09-19 07:17:53.324754 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:17:53.324765 | orchestrator | 2025-09-19 07:17:53.324776 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 07:17:53.324787 | orchestrator | 2025-09-19 07:17:53.324797 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 07:17:53.324814 | orchestrator | Friday 19 September 2025 07:17:27 +0000 (0:00:00.129) 0:01:03.615 ****** 2025-09-19 07:17:53.324825 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:17:53.324836 | orchestrator | 2025-09-19 07:17:53.324847 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 07:17:53.324857 | orchestrator | 2025-09-19 07:17:53.324881 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 07:17:53.324901 | orchestrator | Friday 19 September 2025 07:17:39 +0000 (0:00:11.658) 0:01:15.274 ****** 2025-09-19 07:17:53.324912 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:17:53.324923 | orchestrator | 2025-09-19 07:17:53.324934 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 07:17:53.324945 | orchestrator | 2025-09-19 07:17:53.324955 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 07:17:53.324966 | orchestrator | Friday 19 September 2025 07:17:50 +0000 (0:00:11.280) 0:01:26.554 ****** 2025-09-19 07:17:53.324977 | orchestrator | changed: [testbed-node-2] 2025-09-19 
07:17:53.324988 | orchestrator | 2025-09-19 07:17:53.325006 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:17:53.325017 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 07:17:53.325035 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.325046 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.325057 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:17:53.325068 | orchestrator | 2025-09-19 07:17:53.325079 | orchestrator | 2025-09-19 07:17:53.325089 | orchestrator | 2025-09-19 07:17:53.325100 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:17:53.325111 | orchestrator | Friday 19 September 2025 07:17:51 +0000 (0:00:01.185) 0:01:27.740 ****** 2025-09-19 07:17:53.325121 | orchestrator | =============================================================================== 2025-09-19 07:17:53.325132 | orchestrator | Create admin user ------------------------------------------------------ 53.36s 2025-09-19 07:17:53.325143 | orchestrator | Restart ceph manager service ------------------------------------------- 24.12s 2025-09-19 07:17:53.325153 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.68s 2025-09-19 07:17:53.325164 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.65s 2025-09-19 07:17:53.325175 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.38s 2025-09-19 07:17:53.325185 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.13s 2025-09-19 07:17:53.325196 | orchestrator | Write ceph_dashboard_password 
to temporary file ------------------------- 1.08s 2025-09-19 07:17:53.325206 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s 2025-09-19 07:17:53.325217 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.01s 2025-09-19 07:17:53.325228 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.89s 2025-09-19 07:17:53.325238 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2025-09-19 07:17:53.325249 | orchestrator | 2025-09-19 07:17:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:17:56.350326 | orchestrator | 2025-09-19 07:17:56 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED 2025-09-19 07:17:56.350893 | orchestrator | 2025-09-19 07:17:56 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:17:56.352082 | orchestrator | 2025-09-19 07:17:56 | INFO  | Task 61408106-1e3a-4c19-804b-c0c02af53aae is in state STARTED 2025-09-19 07:17:56.352573 | orchestrator | 2025-09-19 07:17:56 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED 2025-09-19 07:17:56.352873 | orchestrator | 2025-09-19 07:17:56 | INFO  | Wait 1 second(s) until the next check
[identical STARTED status checks for the same four tasks repeated every ~3 seconds from 07:17:59 through 07:19:27; repeats elided]
2025-09-19 07:19:30.822633 | orchestrator | 2025-09-19 07:19:30.822773 | orchestrator | 2025-09-19 07:19:30.822801 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:19:30.822811 | orchestrator | 2025-09-19 07:19:30.822818 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:19:30.822826 | orchestrator | Friday 19 September 2025 07:16:31 +0000 (0:00:00.316) 0:00:00.316 ****** 2025-09-19 07:19:30.822834 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:19:30.822842 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:19:30.822850 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:19:30.822857 | orchestrator | 2025-09-19 07:19:30.822865 |
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:19:30.822872 | orchestrator | Friday 19 September 2025 07:16:31 +0000 (0:00:00.376) 0:00:00.692 ****** 2025-09-19 07:19:30.822879 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-19 07:19:30.822887 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-19 07:19:30.822894 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-19 07:19:30.822901 | orchestrator | 2025-09-19 07:19:30.822909 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-19 07:19:30.822916 | orchestrator | 2025-09-19 07:19:30.822923 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 07:19:30.822930 | orchestrator | Friday 19 September 2025 07:16:32 +0000 (0:00:00.513) 0:00:01.205 ****** 2025-09-19 07:19:30.822938 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:19:30.822946 | orchestrator | 2025-09-19 07:19:30.822953 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-19 07:19:30.822960 | orchestrator | Friday 19 September 2025 07:16:33 +0000 (0:00:00.577) 0:00:01.783 ****** 2025-09-19 07:19:30.822967 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-19 07:19:30.822975 | orchestrator | 2025-09-19 07:19:30.822982 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-19 07:19:30.822989 | orchestrator | Friday 19 September 2025 07:16:36 +0000 (0:00:03.944) 0:00:05.727 ****** 2025-09-19 07:19:30.822997 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-19 07:19:30.823004 | orchestrator | changed: [testbed-node-0] => (item=glance -> 
https://api.testbed.osism.xyz:9292 -> public) 2025-09-19 07:19:30.823011 | orchestrator | 2025-09-19 07:19:30.823019 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-19 07:19:30.823026 | orchestrator | Friday 19 September 2025 07:16:43 +0000 (0:00:06.436) 0:00:12.163 ****** 2025-09-19 07:19:30.823033 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-19 07:19:30.823040 | orchestrator | 2025-09-19 07:19:30.823047 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-19 07:19:30.823055 | orchestrator | Friday 19 September 2025 07:16:47 +0000 (0:00:04.375) 0:00:16.539 ****** 2025-09-19 07:19:30.823062 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 07:19:30.823070 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-19 07:19:30.823077 | orchestrator | 2025-09-19 07:19:30.823084 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-19 07:19:30.823091 | orchestrator | Friday 19 September 2025 07:16:52 +0000 (0:00:04.476) 0:00:21.016 ****** 2025-09-19 07:19:30.823098 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 07:19:30.823105 | orchestrator | 2025-09-19 07:19:30.823113 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-19 07:19:30.823120 | orchestrator | Friday 19 September 2025 07:16:55 +0000 (0:00:03.521) 0:00:24.538 ****** 2025-09-19 07:19:30.823127 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-19 07:19:30.823153 | orchestrator | 2025-09-19 07:19:30.823160 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-19 07:19:30.823168 | orchestrator | Friday 19 September 2025 07:17:00 +0000 (0:00:04.988) 0:00:29.526 ****** 2025-09-19 07:19:30.823199 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.823212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.823223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.823237 | orchestrator | 2025-09-19 07:19:30.823246 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 07:19:30.823255 | orchestrator | Friday 19 September 2025 07:17:03 +0000 (0:00:03.170) 0:00:32.696 ****** 2025-09-19 07:19:30.823263 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:19:30.823272 | orchestrator | 2025-09-19 07:19:30.823284 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-19 07:19:30.823297 | orchestrator | Friday 19 September 2025 07:17:04 +0000 (0:00:00.526) 0:00:33.222 ****** 2025-09-19 07:19:30.823306 | orchestrator | changed: [testbed-node-2] 
2025-09-19 07:19:30.823314 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:19:30.823322 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:19:30.823330 | orchestrator | 2025-09-19 07:19:30.823338 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-19 07:19:30.823347 | orchestrator | Friday 19 September 2025 07:17:08 +0000 (0:00:03.622) 0:00:36.845 ****** 2025-09-19 07:19:30.823355 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 07:19:30.823363 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 07:19:30.823371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 07:19:30.823379 | orchestrator | 2025-09-19 07:19:30.823387 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-19 07:19:30.823396 | orchestrator | Friday 19 September 2025 07:17:09 +0000 (0:00:01.515) 0:00:38.360 ****** 2025-09-19 07:19:30.823404 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 07:19:30.823412 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 07:19:30.823420 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 07:19:30.823428 | orchestrator | 2025-09-19 07:19:30.823436 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-19 07:19:30.823444 | orchestrator | Friday 19 September 2025 07:17:10 +0000 (0:00:01.164) 0:00:39.525 ****** 2025-09-19 07:19:30.823452 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:19:30.823460 | orchestrator | ok: 
[testbed-node-1]
2025-09-19 07:19:30.823468 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:19:30.823476 | orchestrator |
2025-09-19 07:19:30.823489 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-19 07:19:30.823497 | orchestrator | Friday 19 September 2025 07:17:11 +0000 (0:00:00.601) 0:00:40.126 ******
2025-09-19 07:19:30.823505 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:30.823514 | orchestrator |
2025-09-19 07:19:30.823521 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-19 07:19:30.823530 | orchestrator | Friday 19 September 2025 07:17:11 +0000 (0:00:00.258) 0:00:40.384 ******
2025-09-19 07:19:30.823538 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:30.823547 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:19:30.823554 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.823562 | orchestrator |
2025-09-19 07:19:30.823569 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 07:19:30.823576 | orchestrator | Friday 19 September 2025 07:17:11 +0000 (0:00:00.322) 0:00:40.707 ******
2025-09-19 07:19:30.823583 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:19:30.823590 | orchestrator |
2025-09-19 07:19:30.823597 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-09-19 07:19:30.823604 | orchestrator | Friday 19 September 2025 07:17:12 +0000 (0:00:00.631) 0:00:41.339 ******
2025-09-19 07:19:30.823616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.823629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.823642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.823650 | orchestrator | 2025-09-19 07:19:30.823657 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-19 07:19:30.823665 | orchestrator | Friday 19 September 2025 07:17:19 +0000 (0:00:06.768) 0:00:48.107 ****** 2025-09-19 07:19:30.823698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:19:30.823713 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:30.823721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:19:30.823729 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:30.823746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:19:30.823754 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:30.823762 | orchestrator | 2025-09-19 07:19:30.823774 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-19 07:19:30.823781 | orchestrator | Friday 19 September 2025 07:17:22 +0000 (0:00:03.144) 0:00:51.252 ****** 2025-09-19 07:19:30.823789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:19:30.823797 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:30.823813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 07:19:30.823822 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:30.823829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:19:30.823843 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.823850 | orchestrator |
2025-09-19 07:19:30.823857 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-09-19 07:19:30.823864 | orchestrator | Friday 19 September 2025 07:17:25 +0000 (0:00:03.429) 0:00:54.682 ******
2025-09-19 07:19:30.823872 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:30.823879 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.823886 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:19:30.823893 | orchestrator |
2025-09-19 07:19:30.823900 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2025-09-19 07:19:30.823908 | orchestrator | Friday 19 September 2025 07:17:30 +0000 (0:00:04.231) 0:00:58.913 ******
2025-09-19 07:19:30.823919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.823936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.823952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 07:19:30.823960 | orchestrator |
2025-09-19 07:19:30.823968 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-09-19 07:19:30.823975 | orchestrator | Friday 19 September 2025 07:17:35 +0000 (0:00:05.077) 0:01:03.991 ******
2025-09-19 07:19:30.823982 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:30.823989 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:30.823996 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:30.824003 | orchestrator |
2025-09-19 07:19:30.824010 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-09-19 07:19:30.824017 | orchestrator | Friday 19 September 2025 07:17:42 +0000 (0:00:07.091) 0:01:11.083 ******
2025-09-19 07:19:30.824024 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:30.824037 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:19:30.824044 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.824051 | orchestrator |
2025-09-19 07:19:30.824058 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-09-19 07:19:30.824232 | orchestrator | Friday 19 September 2025 07:17:46 +0000 (0:00:04.141) 0:01:15.224 ******
2025-09-19 07:19:30.824247 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:30.824254 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:19:30.824261 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.824268 | orchestrator |
2025-09-19 07:19:30.824275 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-09-19 07:19:30.824283 | orchestrator | Friday 19 September 2025 07:17:52 +0000 (0:00:06.244) 0:01:21.469 ******
2025-09-19 07:19:30.824290 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:19:30.824297 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:30.824304 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.824311 | orchestrator |
2025-09-19 07:19:30.824318 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-09-19 07:19:30.824325 | orchestrator | Friday 19 September 2025 07:17:57 +0000 (0:00:04.997) 0:01:26.467 ******
2025-09-19 07:19:30.824332 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:19:30.824339 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:30.824346 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.824353 | orchestrator |
2025-09-19 07:19:30.824361 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-09-19 07:19:30.824368 | orchestrator | Friday 19 September 2025 07:18:01 +0000 (0:00:04.134) 0:01:30.601 ******
2025-09-19 07:19:30.824375 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:30.824382 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:19:30.824389 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.824396 | orchestrator |
2025-09-19 07:19:30.824403 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-09-19 07:19:30.824410 | orchestrator | Friday 19 September 2025 07:18:02 +0000 (0:00:00.263) 0:01:30.865 ******
2025-09-19 07:19:30.824417 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-19 07:19:30.824425 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.824432 | orchestrator |
skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 07:19:30.824439 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:30.824446 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 07:19:30.824453 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:30.824460 | orchestrator | 2025-09-19 07:19:30.824468 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-19 07:19:30.824475 | orchestrator | Friday 19 September 2025 07:18:06 +0000 (0:00:04.072) 0:01:34.937 ****** 2025-09-19 07:19:30.824483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.824514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.824524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 07:19:30.824536 | orchestrator | 2025-09-19 07:19:30.824543 | orchestrator | TASK [glance : 
include_tasks] **************************************************
2025-09-19 07:19:30.824551 | orchestrator | Friday 19 September 2025 07:18:09 +0000 (0:00:03.356) 0:01:38.293 ******
2025-09-19 07:19:30.824558 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:19:30.824565 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:19:30.824572 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:19:30.824579 | orchestrator |
2025-09-19 07:19:30.824586 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-09-19 07:19:30.824593 | orchestrator | Friday 19 September 2025 07:18:09 +0000 (0:00:00.256) 0:01:38.550 ******
2025-09-19 07:19:30.824600 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:30.824607 | orchestrator |
2025-09-19 07:19:30.824615 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-09-19 07:19:30.824622 | orchestrator | Friday 19 September 2025 07:18:11 +0000 (0:00:01.975) 0:01:40.526 ******
2025-09-19 07:19:30.824629 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:30.824636 | orchestrator |
2025-09-19 07:19:30.824643 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-09-19 07:19:30.824650 | orchestrator | Friday 19 September 2025 07:18:14 +0000 (0:00:02.345) 0:01:42.871 ******
2025-09-19 07:19:30.824657 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:30.824664 | orchestrator |
2025-09-19 07:19:30.824671 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-09-19 07:19:30.824703 | orchestrator | Friday 19 September 2025 07:18:16 +0000 (0:00:02.209) 0:01:45.080 ******
2025-09-19 07:19:30.824711 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:30.824718 | orchestrator |
2025-09-19 07:19:30.824726 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-09-19 07:19:30.824733 | orchestrator | Friday 19 September 2025 07:18:46 +0000 (0:00:30.148) 0:02:15.228 ******
2025-09-19 07:19:30.824740 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:30.824747 | orchestrator |
2025-09-19 07:19:30.824759 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-19 07:19:30.824770 | orchestrator | Friday 19 September 2025 07:18:49 +0000 (0:00:02.550) 0:02:17.778 ******
2025-09-19 07:19:30.824777 | orchestrator |
2025-09-19 07:19:30.824784 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-19 07:19:30.824792 | orchestrator | Friday 19 September 2025 07:18:49 +0000 (0:00:00.224) 0:02:18.003 ******
2025-09-19 07:19:30.824799 | orchestrator |
2025-09-19 07:19:30.824806 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-19 07:19:30.824813 | orchestrator | Friday 19 September 2025 07:18:49 +0000 (0:00:00.226) 0:02:18.229 ******
2025-09-19 07:19:30.824820 | orchestrator |
2025-09-19 07:19:30.824827 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-19 07:19:30.824835 | orchestrator | Friday 19 September 2025 07:18:49 +0000 (0:00:00.171) 0:02:18.400 ******
2025-09-19 07:19:30.824843 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:30.824851 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:30.824859 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:30.824867 | orchestrator |
2025-09-19 07:19:30.824876 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:19:30.824885 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 07:19:30.824895 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 07:19:30.824903 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 07:19:30.824911 | orchestrator |
2025-09-19 07:19:30.824919 | orchestrator |
2025-09-19 07:19:30.824932 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:19:30.824941 | orchestrator | Friday 19 September 2025 07:19:27 +0000 (0:00:38.270) 0:02:56.671 ******
2025-09-19 07:19:30.824949 | orchestrator | ===============================================================================
2025-09-19 07:19:30.824957 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.27s
2025-09-19 07:19:30.824965 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.15s
2025-09-19 07:19:30.824973 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.09s
2025-09-19 07:19:30.824981 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.77s
2025-09-19 07:19:30.824990 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.44s
2025-09-19 07:19:30.824997 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.24s
2025-09-19 07:19:30.825006 | orchestrator | glance : Copying over config.json files for services -------------------- 5.08s
2025-09-19 07:19:30.825014 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.00s
2025-09-19 07:19:30.825022 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.99s
2025-09-19 07:19:30.825030 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.48s
2025-09-19 07:19:30.825038 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.38s
2025-09-19 07:19:30.825046 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.23s
2025-09-19 07:19:30.825054 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.14s
2025-09-19 07:19:30.825062 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.13s
2025-09-19 07:19:30.825070 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.07s
2025-09-19 07:19:30.825078 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.94s
2025-09-19 07:19:30.825087 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.62s
2025-09-19 07:19:30.825094 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.52s
2025-09-19 07:19:30.825102 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.43s
2025-09-19 07:19:30.825111 | orchestrator | glance : Check glance containers ---------------------------------------- 3.36s
2025-09-19 07:19:30.825119 | orchestrator | 2025-09-19 07:19:30 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:19:30.825127 | orchestrator | 2025-09-19 07:19:30 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:30.825136 | orchestrator | 2025-09-19 07:19:30 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:30.825144 | orchestrator | 2025-09-19 07:19:30 | INFO  | Task 61408106-1e3a-4c19-804b-c0c02af53aae is in state SUCCESS
2025-09-19 07:19:30.825152 | orchestrator | 2025-09-19 07:19:30 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:30.825160 | orchestrator | 2025-09-19 07:19:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:33.851000 | orchestrator | 2025-09-19 07:19:33 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state STARTED
2025-09-19 07:19:33.852349 | orchestrator | 2025-09-19 07:19:33 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:33.854758 | orchestrator | 2025-09-19 07:19:33 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:33.856498 | orchestrator | 2025-09-19 07:19:33 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:33.856575 | orchestrator | 2025-09-19 07:19:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:36.912643 | orchestrator | 2025-09-19 07:19:36 | INFO  | Task de649183-cb1b-46eb-85e7-482b70c7ca39 is in state SUCCESS
2025-09-19 07:19:36.913725 | orchestrator |
2025-09-19 07:19:36.913762 | orchestrator |
2025-09-19 07:19:36.913774 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:19:36.913786 | orchestrator |
2025-09-19 07:19:36.913797 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:19:36.913808 | orchestrator | Friday 19 September 2025 07:16:24 +0000 (0:00:00.282) 0:00:00.282 ******
2025-09-19 07:19:36.913819 | orchestrator | ok: [testbed-manager]
2025-09-19 07:19:36.913831 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:19:36.913842 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:19:36.913853 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:19:36.913864 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:19:36.913874 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:19:36.913885 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:19:36.913896 | orchestrator |
2025-09-19 07:19:36.913907 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:19:36.913918 | orchestrator | Friday 19 September 2025 07:16:25 +0000 (0:00:00.818) 0:00:01.101 ******
2025-09-19 07:19:36.913930 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-09-19 07:19:36.913941 |
orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-09-19 07:19:36.913952 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-09-19 07:19:36.913962 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-09-19 07:19:36.913973 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-09-19 07:19:36.913983 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-09-19 07:19:36.913994 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-09-19 07:19:36.914005 | orchestrator |
2025-09-19 07:19:36.914066 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-09-19 07:19:36.914080 | orchestrator |
2025-09-19 07:19:36.914092 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-19 07:19:36.914103 | orchestrator | Friday 19 September 2025 07:16:25 +0000 (0:00:00.689) 0:00:01.791 ******
2025-09-19 07:19:36.914144 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:19:36.914158 | orchestrator |
2025-09-19 07:19:36.914168 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-09-19 07:19:36.914179 | orchestrator | Friday 19 September 2025 07:16:27 +0000 (0:00:01.302) 0:00:03.093 ******
2025-09-19 07:19:36.914194 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:19:36.914209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914389 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914480 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:19:36.914587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914738 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.914813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.914824 | orchestrator |
2025-09-19 07:19:36.914836 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-19 07:19:36.914847 | orchestrator | Friday 19 September 2025 07:16:31 +0000 (0:00:03.854) 0:00:06.948 ******
2025-09-19 07:19:36.914859 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:19:36.914870 | orchestrator |
2025-09-19 07:19:36.914881 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-09-19 07:19:36.914892 | orchestrator | Friday 19 September 2025 07:16:32 +0000 (0:00:01.698) 0:00:08.646 ******
2025-09-19 07:19:36.914919 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 07:19:36.914939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.914997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.915008 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.915019 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.915030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.915047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.915059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.915075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.915093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.915105 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.915116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.915127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.915144 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 07:19:36.915157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.915173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.915191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.915202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.915214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.915225 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.915248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.915260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.915271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.915415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.915438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.915450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.915461 | orchestrator | 2025-09-19 07:19:36.915472 | orchestrator | TASK [service-cert-copy : 
prometheus | Copying over backend internal TLS certificate] *** 2025-09-19 07:19:36.915483 | orchestrator | Friday 19 September 2025 07:16:39 +0000 (0:00:06.367) 0:00:15.013 ****** 2025-09-19 07:19:36.915494 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 07:19:36.915513 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.915525 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.915541 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 07:19:36.915559 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.915582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.915600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.915611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.915622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.915665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.915733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.915848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915859 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:19:36.915870 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.915881 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.915891 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.915902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.915921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.915976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.915987 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.915998 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.916010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916031 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.916043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.916059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916097 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.916108 | orchestrator | 2025-09-19 07:19:36.916119 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-19 07:19:36.916130 | orchestrator | Friday 19 September 2025 07:16:40 +0000 (0:00:01.569) 0:00:16.583 ****** 2025-09-19 07:19:36.916141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.916152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.916164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.916280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916293 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.916311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 07:19:36.916338 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.916350 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916361 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 07:19:36.916373 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
 2025-09-19 07:19:36.916384 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.916395 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:19:36.916407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.916417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.916434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.916458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.916481 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.916492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.916503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.916514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.916525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 07:19:36.916557 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.916574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.916586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916608 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.916619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.916630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.916700 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.916712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 07:19:36.916734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.917504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 07:19:36.917911 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.917934 | orchestrator | 2025-09-19 07:19:36.917946 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-19 07:19:36.917957 | orchestrator | Friday 19 September 2025 07:16:42 +0000 (0:00:01.817) 0:00:18.400 ****** 2025-09-19 07:19:36.917969 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 07:19:36.917981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.917992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.918003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.918014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.918095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.918122 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.918134 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.918145 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.918155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.918166 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.918192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918233 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.918244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 07:19:36.918255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 
07:19:36.918265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.918282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918342 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.918352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 07:19:36.918368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.918378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.918392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.918402 | orchestrator |
2025-09-19 07:19:36.918412 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-19 07:19:36.918421 | orchestrator | Friday 19 September 2025  07:16:48 +0000 (0:00:06.087)       0:00:24.488 ******
2025-09-19 07:19:36.918431 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 07:19:36.918441 | orchestrator |
2025-09-19 07:19:36.918451 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-19 07:19:36.918465 | orchestrator | Friday 19 September 2025  07:16:49 +0000 (0:00:01.032)       0:00:25.521 ******
2025-09-19 07:19:36.918476 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109835, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2300339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918487 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109835, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2300339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918497 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109908, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2498512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918513 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109835, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2300339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918523 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109835, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2300339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918533 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109835, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2300339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918552 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109908, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2498512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918566 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109835, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2300339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918577 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109908, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2498512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918588 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109835, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2300339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918605 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109822, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.227766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918616 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109908, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2498512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918627 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109908, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2498512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918651 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109822, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.227766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918662 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109822, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.227766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918691 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109908, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2498512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918702 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109881, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2449577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918719 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109822, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.227766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918730 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109881, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2449577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918741 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109822, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.227766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918762 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109881, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2449577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918774 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109822, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.227766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918785 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109908, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2498512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918803 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109807, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2062724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918814 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109807, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2062724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918825 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109881, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2449577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918836 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109881, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2449577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918852 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109836, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2310743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918869 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109807, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2062724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918880 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109807, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2062724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918898 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109836, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2310743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918908 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109881, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2449577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918918 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109807, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2062724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918927 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109836, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2310743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918941 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109836, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2310743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918956 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109807, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2062724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918966 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109836, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2310743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918982 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109870, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2418144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.918992 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109822, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.227766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919001 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109870, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2418144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919011 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109870, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2418144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919025 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109870, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2418144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919040 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109870, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2418144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919051 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109836, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2310743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919066 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1109840, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2317662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919076 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1109840, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2317662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919086 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1109840, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2317662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919096 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109870, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2418144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919109 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1109840, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2317662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919125 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1109833, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2287662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919140 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1109840, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2317662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919150 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1109833, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2287662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919160 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1109833, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2287662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919170 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109897, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2488835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919180 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109897, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2488835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919201 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109881, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2449577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919216 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109802, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2045739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919232 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1109840, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2317662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919241 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109897, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2488835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919251 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109802, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2045739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919261 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1109833, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2287662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919270 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1109833, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2287662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919284 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109802, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2045739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 07:19:36.919299 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1109923, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.274767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919317 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1109833, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2287662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919327 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109897, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2488835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919337 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1109923, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.274767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919347 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109897, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2488835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919356 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1109923, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.274767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919370 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109807, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2062724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.919392 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109897, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2488835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919402 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1109891, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2472463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919412 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1109891, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2472463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919422 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109802, 'dev': 104, 'nlink': 1, 'atime': 
1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2045739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919432 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109802, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2045739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919442 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1109891, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2472463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919455 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109802, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2045739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919476 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1109923, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.274767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919487 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109818, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2068284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919496 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109818, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2068284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919506 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109836, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2310743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.919516 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1109923, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.274767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919526 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109818, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2068284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919540 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1109923, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.274767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919563 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1109891, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2472463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919573 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1109804, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.204877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919637 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1109891, 'dev': 104, 'nlink': 1, 
'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2472463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919649 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1109804, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.204877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919659 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109818, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2068284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919669 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1109804, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.204877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919702 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1109891, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2472463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919719 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1109850, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919729 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109818, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2068284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919744 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1109850, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919755 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1109804, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.204877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919765 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1109850, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919775 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109841, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919794 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109870, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2418144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.919804 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109841, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919814 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109818, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 
'mtime': 1758240129.0, 'ctime': 1758263636.2068284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919829 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109841, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919839 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1109804, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.204877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919849 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1109850, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919859 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109916, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2737668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919874 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.919889 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109916, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2737668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919898 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.919908 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109916, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2737668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919918 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.919928 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1109804, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.204877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919942 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1109850, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919952 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109841, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-09-19 07:19:36.919962 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1109850, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919979 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109841, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.919993 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109916, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2737668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.920003 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.920013 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109916, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2737668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.920023 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109841, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.920032 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.920046 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1109840, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2317662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920056 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109916, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2737668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 07:19:36.920066 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.920076 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1109833, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2287662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920095 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109897, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2488835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920110 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109802, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2045739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920120 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1109923, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.274767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920130 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1109891, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2472463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920145 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109818, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2068284, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920155 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1109804, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.204877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920164 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1109850, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920180 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109841, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2359288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920194 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109916, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.2737668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 07:19:36.920204 | orchestrator | 2025-09-19 07:19:36.920214 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-19 07:19:36.920224 | orchestrator | Friday 19 September 2025 07:17:11 +0000 (0:00:21.997) 0:00:47.518 ****** 2025-09-19 07:19:36.920234 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:19:36.920243 | orchestrator | 2025-09-19 07:19:36.920252 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-19 07:19:36.920262 | orchestrator | Friday 19 September 2025 07:17:12 +0000 (0:00:00.785) 0:00:48.304 ****** 2025-09-19 07:19:36.920272 | orchestrator | [WARNING]: Skipped 2025-09-19 07:19:36.920282 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920292 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-19 07:19:36.920302 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920311 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-19 07:19:36.920321 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:19:36.920331 | orchestrator | [WARNING]: Skipped 2025-09-19 07:19:36.920340 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920350 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-19 07:19:36.920359 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920369 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-19 07:19:36.920378 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:19:36.920388 | orchestrator | [WARNING]: Skipped 2025-09-19 07:19:36.920397 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920407 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-19 07:19:36.920416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920430 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-19 07:19:36.920440 | orchestrator | [WARNING]: Skipped 2025-09-19 07:19:36.920450 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920459 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-19 07:19:36.920475 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920484 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-19 07:19:36.920494 | orchestrator | [WARNING]: Skipped 2025-09-19 07:19:36.920503 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920513 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-19 07:19:36.920522 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920531 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-19 07:19:36.920541 | orchestrator | [WARNING]: Skipped 2025-09-19 
07:19:36.920550 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920559 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-19 07:19:36.920569 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920579 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-19 07:19:36.920588 | orchestrator | [WARNING]: Skipped 2025-09-19 07:19:36.920597 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920607 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-19 07:19:36.920616 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 07:19:36.920626 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-19 07:19:36.920635 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 07:19:36.920645 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 07:19:36.920654 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 07:19:36.920663 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 07:19:36.920673 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 07:19:36.920698 | orchestrator | 2025-09-19 07:19:36.920708 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-19 07:19:36.920718 | orchestrator | Friday 19 September 2025 07:17:16 +0000 (0:00:04.014) 0:00:52.319 ****** 2025-09-19 07:19:36.920727 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 07:19:36.920737 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.920746 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 07:19:36.920756 | orchestrator | skipping: [testbed-node-1] 
2025-09-19 07:19:36.920765 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 07:19:36.920775 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.920784 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 07:19:36.920794 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.920803 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 07:19:36.920813 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.920822 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 07:19:36.920836 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.920846 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-19 07:19:36.920856 | orchestrator | 2025-09-19 07:19:36.920865 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-19 07:19:36.920875 | orchestrator | Friday 19 September 2025 07:17:33 +0000 (0:00:17.006) 0:01:09.325 ****** 2025-09-19 07:19:36.920884 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 07:19:36.920894 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 07:19:36.920909 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.920918 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.920928 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 07:19:36.920937 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.920947 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 07:19:36.920956 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.920965 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 07:19:36.920975 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.920984 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 07:19:36.920994 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.921003 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-19 07:19:36.921012 | orchestrator | 2025-09-19 07:19:36.921022 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-19 07:19:36.921031 | orchestrator | Friday 19 September 2025 07:17:37 +0000 (0:00:03.652) 0:01:12.977 ****** 2025-09-19 07:19:36.921041 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 07:19:36.921055 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.921065 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 07:19:36.921075 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 07:19:36.921085 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.921094 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.921104 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-19 07:19:36.921114 | orchestrator | skipping: [testbed-node-4] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 07:19:36.921123 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.921133 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 07:19:36.921143 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.921152 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 07:19:36.921162 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.921171 | orchestrator | 2025-09-19 07:19:36.921181 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-19 07:19:36.921190 | orchestrator | Friday 19 September 2025 07:17:39 +0000 (0:00:02.435) 0:01:15.413 ****** 2025-09-19 07:19:36.921200 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:19:36.921209 | orchestrator | 2025-09-19 07:19:36.921218 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-19 07:19:36.921228 | orchestrator | Friday 19 September 2025 07:17:40 +0000 (0:00:00.729) 0:01:16.142 ****** 2025-09-19 07:19:36.921237 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:19:36.921247 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.921256 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.921266 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.921275 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.921284 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.921294 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.921303 | orchestrator | 2025-09-19 07:19:36.921312 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-19 
07:19:36.921328 | orchestrator | Friday 19 September 2025 07:17:41 +0000 (0:00:00.894) 0:01:17.036 ****** 2025-09-19 07:19:36.921337 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:19:36.921346 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.921356 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.921365 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.921374 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:19:36.921384 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:19:36.921393 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:19:36.921402 | orchestrator | 2025-09-19 07:19:36.921412 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-19 07:19:36.921422 | orchestrator | Friday 19 September 2025 07:17:43 +0000 (0:00:02.152) 0:01:19.189 ****** 2025-09-19 07:19:36.921431 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 07:19:36.921441 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.921454 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 07:19:36.921464 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 07:19:36.921474 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:19:36.921483 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.921492 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 07:19:36.921502 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.921511 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 07:19:36.921521 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.921530 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 07:19:36.921539 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.921549 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 07:19:36.921558 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.921567 | orchestrator | 2025-09-19 07:19:36.921577 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-19 07:19:36.921586 | orchestrator | Friday 19 September 2025 07:17:45 +0000 (0:00:02.728) 0:01:21.918 ****** 2025-09-19 07:19:36.921596 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 07:19:36.921605 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.921615 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 07:19:36.921624 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.921634 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 07:19:36.921643 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.921653 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 07:19:36.921662 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.921700 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-19 07:19:36.921711 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 07:19:36.921721 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.921730 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 07:19:36.921740 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.921749 | orchestrator | 2025-09-19 07:19:36.921759 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-19 07:19:36.921774 | orchestrator | Friday 19 September 2025 07:17:48 +0000 (0:00:02.485) 0:01:24.403 ****** 2025-09-19 07:19:36.921784 | orchestrator | [WARNING]: Skipped 2025-09-19 07:19:36.921793 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-19 07:19:36.921803 | orchestrator | due to this access issue: 2025-09-19 07:19:36.921812 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-19 07:19:36.921822 | orchestrator | not a directory 2025-09-19 07:19:36.921831 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 07:19:36.921841 | orchestrator | 2025-09-19 07:19:36.921850 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-19 07:19:36.921860 | orchestrator | Friday 19 September 2025 07:17:51 +0000 (0:00:02.753) 0:01:27.157 ****** 2025-09-19 07:19:36.921869 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:19:36.921878 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.921888 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.921897 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.921906 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.921916 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.921925 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.921935 | orchestrator | 2025-09-19 07:19:36.921944 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-19 07:19:36.921954 | orchestrator | Friday 19 September 2025 07:17:52 +0000 
(0:00:01.411) 0:01:28.568 ****** 2025-09-19 07:19:36.921963 | orchestrator | skipping: [testbed-manager] 2025-09-19 07:19:36.921972 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:19:36.921982 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:19:36.921991 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:19:36.922000 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:19:36.922009 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:19:36.922055 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:19:36.922067 | orchestrator | 2025-09-19 07:19:36.922077 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-19 07:19:36.922087 | orchestrator | Friday 19 September 2025 07:17:54 +0000 (0:00:01.390) 0:01:29.959 ****** 2025-09-19 07:19:36.922097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.922112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.922122 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 07:19:36.922144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.922155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.922165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.922175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 07:19:36.922185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.922199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 07:19:36.922209 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.922219 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 07:19:36.922239 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922259 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.922279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.922315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.922333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922358 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 07:19:36.922370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 07:19:36.922419 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.922433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.922444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.922453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 07:19:36.922463 | orchestrator |
2025-09-19 07:19:36.922473 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-09-19 07:19:36.922482 | orchestrator | Friday 19 September 2025 07:17:59 +0000 (0:00:05.221) 0:01:35.180 ******
2025-09-19 07:19:36.922492 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-19 07:19:36.922502 | orchestrator | skipping: [testbed-manager]
2025-09-19 07:19:36.922511 | orchestrator |
2025-09-19 07:19:36.922521 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 07:19:36.922530 | orchestrator | Friday 19 September 2025 07:18:00 +0000 (0:00:01.470) 0:01:36.651 ******
2025-09-19 07:19:36.922540 | orchestrator |
2025-09-19 07:19:36.922549 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 07:19:36.922559 | orchestrator | Friday 19 September 2025 07:18:00 +0000 (0:00:00.128) 0:01:36.779 ******
2025-09-19 07:19:36.922568 | orchestrator |
2025-09-19 07:19:36.922578 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 07:19:36.922587 | orchestrator | Friday 19 September 2025 07:18:01 +0000 (0:00:00.143) 0:01:36.923 ******
2025-09-19 07:19:36.922597 | orchestrator |
2025-09-19 07:19:36.922607 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 07:19:36.922616 | orchestrator | Friday 19 September 2025 07:18:01 +0000 (0:00:00.146) 0:01:37.069 ******
2025-09-19 07:19:36.922626 | orchestrator |
2025-09-19 07:19:36.922635 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 07:19:36.922645 | orchestrator | Friday 19 September 2025 07:18:01 +0000 (0:00:00.343) 0:01:37.413 ******
2025-09-19 07:19:36.922654 | orchestrator |
2025-09-19 07:19:36.922669 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 07:19:36.922698 | orchestrator | Friday 19 September 2025 07:18:01 +0000 (0:00:00.088) 0:01:37.501 ******
2025-09-19 07:19:36.922707 | orchestrator |
2025-09-19 07:19:36.922717 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 07:19:36.922730 | orchestrator | Friday 19 September 2025 07:18:01 +0000 (0:00:00.048) 0:01:37.550 ******
2025-09-19 07:19:36.922740 | orchestrator |
2025-09-19 07:19:36.922749 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-09-19 07:19:36.922759 | orchestrator | Friday 19 September 2025 07:18:01 +0000 (0:00:00.066) 0:01:37.617 ******
2025-09-19 07:19:36.922768 | orchestrator | changed: [testbed-manager]
2025-09-19 07:19:36.922778 | orchestrator |
2025-09-19 07:19:36.922787 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-09-19 07:19:36.922797 | orchestrator | Friday 19 September 2025 07:18:18 +0000 (0:00:16.547) 0:01:54.164 ******
2025-09-19 07:19:36.922806 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:19:36.922816 | orchestrator | changed: [testbed-manager]
2025-09-19 07:19:36.922825 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:36.922834 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:36.922844 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:36.922853 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:19:36.922863 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:19:36.922872 | orchestrator |
2025-09-19 07:19:36.922882 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-09-19 07:19:36.922892 | orchestrator | Friday 19 September 2025 07:18:31 +0000 (0:00:13.734) 0:02:07.899 ******
2025-09-19 07:19:36.922901 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:36.922911 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:36.922920 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:36.922930 | orchestrator |
2025-09-19 07:19:36.922939 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-09-19 07:19:36.922949 | orchestrator | Friday 19 September 2025 07:18:37 +0000 (0:00:05.192) 0:02:13.091 ******
2025-09-19 07:19:36.922958 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:36.922967 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:36.922977 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:36.922986 | orchestrator |
2025-09-19 07:19:36.922996 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-09-19 07:19:36.923005 | orchestrator | Friday 19 September 2025 07:18:47 +0000 (0:00:10.257) 0:02:23.349 ******
2025-09-19 07:19:36.923015 | orchestrator | changed: [testbed-manager]
2025-09-19 07:19:36.923024 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:36.923033 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:19:36.923043 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:36.923058 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:19:36.923067 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:19:36.923077 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:36.923086 | orchestrator |
2025-09-19 07:19:36.923096 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-09-19 07:19:36.923105 | orchestrator | Friday 19 September 2025 07:19:02 +0000 (0:00:15.229) 0:02:38.578 ******
2025-09-19 07:19:36.923115 | orchestrator | changed: [testbed-manager]
2025-09-19 07:19:36.923124 | orchestrator |
2025-09-19 07:19:36.923134 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-09-19 07:19:36.923144 | orchestrator | Friday 19 September 2025 07:19:14 +0000 (0:00:11.356) 0:02:49.935 ******
2025-09-19 07:19:36.923153 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:19:36.923162 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:19:36.923172 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:19:36.923181 | orchestrator |
2025-09-19 07:19:36.923191 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-09-19 07:19:36.923200 | orchestrator | Friday 19 September 2025 07:19:23 +0000 (0:00:09.524) 0:02:59.460 ******
2025-09-19 07:19:36.923215 | orchestrator | changed: [testbed-manager]
2025-09-19 07:19:36.923225 | orchestrator |
2025-09-19 07:19:36.923234 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-09-19 07:19:36.923244 | orchestrator | Friday 19 September 2025 07:19:28 +0000 (0:00:05.310) 0:03:04.771 ******
2025-09-19 07:19:36.923253 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:19:36.923263 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:19:36.923272 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:19:36.923281 | orchestrator |
2025-09-19 07:19:36.923291 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:19:36.923300 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-19 07:19:36.923310 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 07:19:36.923320 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 07:19:36.923330 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 07:19:36.923339 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 07:19:36.923349 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 07:19:36.923358 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 07:19:36.923367 | orchestrator |
2025-09-19 07:19:36.923377 | orchestrator |
2025-09-19 07:19:36.923386 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:19:36.923396 | orchestrator | Friday 19 September 2025 07:19:35 +0000 (0:00:06.553) 0:03:11.325 ******
2025-09-19 07:19:36.923406 | orchestrator | ===============================================================================
2025-09-19 07:19:36.923419 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.00s
2025-09-19 07:19:36.923429 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.01s
2025-09-19 07:19:36.923438 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.55s
2025-09-19 07:19:36.923448 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.23s
2025-09-19 07:19:36.923457 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.73s
2025-09-19 07:19:36.923467 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.36s
2025-09-19 07:19:36.923476 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.26s
2025-09-19 07:19:36.923485 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.52s
2025-09-19 07:19:36.923495 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.55s
2025-09-19 07:19:36.923504 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.37s
2025-09-19 07:19:36.923513 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.09s
2025-09-19 07:19:36.923523 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.31s
2025-09-19 07:19:36.923532 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.22s
2025-09-19 07:19:36.923541 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.19s
2025-09-19 07:19:36.923551 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 4.01s
2025-09-19 07:19:36.923560 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.85s
2025-09-19 07:19:36.923574 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.65s
2025-09-19 07:19:36.923584 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 2.75s
2025-09-19 07:19:36.923594 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.73s
2025-09-19 07:19:36.923603 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.49s
2025-09-19 07:19:36.923617 | orchestrator | 2025-09-19 07:19:36 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:36.923627 | orchestrator | 2025-09-19 07:19:36 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:36.923636 | orchestrator | 2025-09-19 07:19:36 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:36.923646 | orchestrator | 2025-09-19 07:19:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:39.954618 | orchestrator | 2025-09-19 07:19:39 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:39.957294 | orchestrator | 2025-09-19 07:19:39 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:19:39.958799 | orchestrator | 2025-09-19 07:19:39 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:39.960912 | orchestrator | 2025-09-19 07:19:39 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:39.960960 | orchestrator | 2025-09-19 07:19:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:43.011942 | orchestrator | 2025-09-19 07:19:43 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:43.012775 | orchestrator | 2025-09-19 07:19:43 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:19:43.014920 | orchestrator | 2025-09-19 07:19:43 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:43.016795 | orchestrator | 2025-09-19 07:19:43 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:43.016828 | orchestrator | 2025-09-19 07:19:43 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:46.058149 | orchestrator | 2025-09-19 07:19:46 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:46.058896 | orchestrator | 2025-09-19 07:19:46 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:19:46.060351 | orchestrator | 2025-09-19 07:19:46 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:46.061887 | orchestrator | 2025-09-19 07:19:46 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:46.061911 | orchestrator | 2025-09-19 07:19:46 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:49.107521 | orchestrator | 2025-09-19 07:19:49 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:49.108032 | orchestrator | 2025-09-19 07:19:49 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:19:49.108999 | orchestrator | 2025-09-19 07:19:49 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:49.111331 | orchestrator | 2025-09-19 07:19:49 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:49.111361 | orchestrator | 2025-09-19 07:19:49 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:52.158788 | orchestrator | 2025-09-19 07:19:52 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:52.160074 | orchestrator | 2025-09-19 07:19:52 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:19:52.161791 | orchestrator | 2025-09-19 07:19:52 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:52.163937 | orchestrator | 2025-09-19 07:19:52 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:52.163979 | orchestrator | 2025-09-19 07:19:52 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:55.210275 | orchestrator | 2025-09-19 07:19:55 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:55.210369 | orchestrator | 2025-09-19 07:19:55 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:19:55.211266 | orchestrator | 2025-09-19 07:19:55 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:55.212944 | orchestrator | 2025-09-19 07:19:55 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:55.213030 | orchestrator | 2025-09-19 07:19:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:19:58.254211 | orchestrator | 2025-09-19 07:19:58 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:19:58.254867 | orchestrator | 2025-09-19 07:19:58 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:19:58.256222 | orchestrator | 2025-09-19 07:19:58 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:19:58.257629 | orchestrator | 2025-09-19 07:19:58 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:19:58.257720 | orchestrator | 2025-09-19 07:19:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:01.300748 | orchestrator | 2025-09-19 07:20:01 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:01.302615 | orchestrator | 2025-09-19 07:20:01 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:01.305329 | orchestrator | 2025-09-19 07:20:01 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:01.307881 | orchestrator | 2025-09-19 07:20:01 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:20:01.307916 | orchestrator | 2025-09-19 07:20:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:04.349417 | orchestrator | 2025-09-19 07:20:04 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:04.351128 | orchestrator | 2025-09-19 07:20:04 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:04.352303 | orchestrator | 2025-09-19 07:20:04 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:04.353171 | orchestrator | 2025-09-19 07:20:04 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:20:04.353197 | orchestrator | 2025-09-19 07:20:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:07.399501 | orchestrator | 2025-09-19 07:20:07 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:07.401091 | orchestrator | 2025-09-19 07:20:07 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:07.403135 | orchestrator | 2025-09-19 07:20:07 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:07.405763 | orchestrator | 2025-09-19 07:20:07 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:20:07.405888 | orchestrator | 2025-09-19 07:20:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:10.448055 | orchestrator | 2025-09-19 07:20:10 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:10.448163 | orchestrator | 2025-09-19 07:20:10 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:10.448554 | orchestrator | 2025-09-19 07:20:10 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:10.449639 | orchestrator | 2025-09-19 07:20:10 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:20:10.449662 | orchestrator | 2025-09-19 07:20:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:13.500118 | orchestrator | 2025-09-19 07:20:13 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:13.500323 | orchestrator | 2025-09-19 07:20:13 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:13.501124 | orchestrator | 2025-09-19 07:20:13 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:13.501865 | orchestrator | 2025-09-19 07:20:13 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:20:13.502014 | orchestrator | 2025-09-19 07:20:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:16.529910 | orchestrator | 2025-09-19 07:20:16 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:16.531871 | orchestrator | 2025-09-19 07:20:16 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:16.533031 | orchestrator | 2025-09-19 07:20:16 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:16.533709 | orchestrator | 2025-09-19 07:20:16 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:20:16.534080 | orchestrator | 2025-09-19 07:20:16 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:19.573935 | orchestrator | 2025-09-19 07:20:19 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:19.575841 | orchestrator | 2025-09-19 07:20:19 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:19.582571 | orchestrator | 2025-09-19 07:20:19 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:19.586131 | orchestrator | 2025-09-19 07:20:19 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:20:19.586191 | orchestrator | 2025-09-19 07:20:19 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:22.619222 | orchestrator | 2025-09-19 07:20:22 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:22.620778 | orchestrator | 2025-09-19 07:20:22 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:22.623794 | orchestrator | 2025-09-19 07:20:22 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:22.625951 | orchestrator | 2025-09-19 07:20:22 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:20:22.626232 | orchestrator | 2025-09-19 07:20:22 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:25.650932 | orchestrator | 2025-09-19 07:20:25 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:25.651141 | orchestrator | 2025-09-19 07:20:25 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:25.652503 | orchestrator | 2025-09-19 07:20:25 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:25.654573 | orchestrator | 2025-09-19 07:20:25 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state STARTED
2025-09-19 07:20:25.654606 | orchestrator | 2025-09-19 07:20:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:20:28.692609 | orchestrator | 2025-09-19 07:20:28 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
2025-09-19 07:20:28.693319 | orchestrator | 2025-09-19 07:20:28 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED
2025-09-19 07:20:28.694096 | orchestrator | 2025-09-19 07:20:28 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:20:28.696036 | orchestrator | 2025-09-19 07:20:28 | INFO  | Task 1d1faaab-1490-4d69-b5d2-6b4379746d53 is in state SUCCESS
2025-09-19 07:20:28.697953 | orchestrator |
2025-09-19 07:20:28.698814 | orchestrator |
2025-09-19 07:20:28.698828 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:20:28.698841 | orchestrator |
2025-09-19 07:20:28.698852 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:20:28.698863 | orchestrator | Friday 19 September 2025 07:16:39 +0000 (0:00:00.237) 0:00:00.237 ******
2025-09-19 07:20:28.698875 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:20:28.698888 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:20:28.698899 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:20:28.698909 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:20:28.698920 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:20:28.698930 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:20:28.698941 | orchestrator |
2025-09-19 07:20:28.698952 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:20:28.698972 | orchestrator | Friday 19 September 2025 07:16:40 +0000 (0:00:01.160) 0:00:01.397 ******
2025-09-19 07:20:28.698983 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-09-19 07:20:28.698995 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-09-19 07:20:28.699007 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-09-19 07:20:28.699017 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-09-19 07:20:28.699028 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-09-19 07:20:28.699039 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-09-19 07:20:28.699049 | orchestrator |
2025-09-19 07:20:28.699060 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-09-19 07:20:28.699071 | orchestrator |
2025-09-19 07:20:28.699082 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-19 07:20:28.699093 | orchestrator | Friday 19 September 2025 07:16:40 +0000 (0:00:00.560) 0:00:01.958 ******
2025-09-19 07:20:28.699104 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:20:28.699117 | orchestrator |
2025-09-19 07:20:28.699127 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-09-19 07:20:28.699138 | orchestrator | Friday 19 September 2025 07:16:42 +0000 (0:00:01.189) 0:00:03.147 ******
2025-09-19 07:20:28.699149 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-09-19 07:20:28.699160 | orchestrator |
2025-09-19 07:20:28.699170 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-09-19 07:20:28.699181 | orchestrator | Friday 19 September 2025 07:16:45 +0000 (0:00:03.917) 0:00:07.065 ******
2025-09-19 07:20:28.699191 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-09-19 07:20:28.699202 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-09-19 07:20:28.699213 | orchestrator |
2025-09-19 07:20:28.699223 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-09-19 07:20:28.699260 | orchestrator | Friday 19 September 2025 07:16:52 +0000 (0:00:07.013) 0:00:14.079 ******
2025-09-19 07:20:28.699272 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:20:28.699282 | orchestrator |
2025-09-19 07:20:28.699293 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-09-19 07:20:28.699304 | orchestrator | Friday 19 September 2025 07:16:56 +0000 (0:00:03.578) 0:00:17.658 ******
2025-09-19 07:20:28.699314 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:20:28.699325 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-09-19 07:20:28.699336 | orchestrator |
2025-09-19 07:20:28.699346 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-09-19 07:20:28.699357 | orchestrator | Friday 19 September 2025 07:17:00 +0000 (0:00:04.406) 0:00:22.065 ******
2025-09-19 07:20:28.699368 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:20:28.699378 | orchestrator |
2025-09-19 07:20:28.699391 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-09-19 07:20:28.699403 | orchestrator | Friday 19 September 2025 07:17:04 +0000 (0:00:03.922) 0:00:25.988 ******
2025-09-19 07:20:28.699415 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-09-19 07:20:28.699427 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-09-19 07:20:28.699439 | orchestrator |
2025-09-19 07:20:28.699452 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-09-19 07:20:28.699465 | orchestrator | Friday 19 September 2025 07:17:13 +0000 (0:00:08.646) 0:00:34.634 ******
2025-09-19 07:20:28.699481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:20:28.699552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-19 07:20:28.699569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.699591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.699605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.699619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.699687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.699710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 
07:20:28.699725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.699752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.699766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.699779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.699792 | orchestrator | 2025-09-19 07:20:28.699837 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 07:20:28.699850 | orchestrator | Friday 19 September 2025 07:17:17 +0000 (0:00:04.231) 0:00:38.866 ****** 2025-09-19 07:20:28.699861 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.699872 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:20:28.699883 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:20:28.699893 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:20:28.699904 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:20:28.699915 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:20:28.699925 | orchestrator | 2025-09-19 
07:20:28.699936 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 07:20:28.699947 | orchestrator | Friday 19 September 2025 07:17:18 +0000 (0:00:00.449) 0:00:39.316 ****** 2025-09-19 07:20:28.699963 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.699974 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:20:28.699984 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:20:28.699995 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:20:28.700013 | orchestrator | 2025-09-19 07:20:28.700024 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-19 07:20:28.700034 | orchestrator | Friday 19 September 2025 07:17:18 +0000 (0:00:00.788) 0:00:40.104 ****** 2025-09-19 07:20:28.700045 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-19 07:20:28.700056 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-19 07:20:28.700067 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-19 07:20:28.700077 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-19 07:20:28.700088 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-19 07:20:28.700099 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-19 07:20:28.700110 | orchestrator | 2025-09-19 07:20:28.700120 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-19 07:20:28.700131 | orchestrator | Friday 19 September 2025 07:17:20 +0000 (0:00:01.728) 0:00:41.832 ****** 2025-09-19 07:20:28.700143 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 07:20:28.700157 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 07:20:28.700169 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 07:20:28.700211 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 07:20:28.700235 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 
'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 07:20:28.700247 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 07:20:28.700259 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 07:20:28.700271 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 07:20:28.700316 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 07:20:28.700337 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 07:20:28.700349 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 07:20:28.700361 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 07:20:28.700372 | orchestrator | 
2025-09-19 07:20:28.700383 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-19 07:20:28.700394 | orchestrator | Friday 19 September 2025 07:17:24 +0000 (0:00:03.581) 0:00:45.414 ****** 2025-09-19 07:20:28.700405 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 07:20:28.700417 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 07:20:28.700427 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 07:20:28.700438 | orchestrator | 2025-09-19 07:20:28.700449 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-19 07:20:28.700460 | orchestrator | Friday 19 September 2025 07:17:26 +0000 (0:00:01.946) 0:00:47.360 ****** 2025-09-19 07:20:28.700470 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-19 07:20:28.700481 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-19 07:20:28.700498 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-19 07:20:28.700509 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 07:20:28.700520 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 07:20:28.700559 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 07:20:28.700572 | orchestrator | 2025-09-19 07:20:28.700583 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-19 07:20:28.700593 | orchestrator | Friday 19 September 2025 07:17:29 +0000 (0:00:03.259) 0:00:50.619 ****** 2025-09-19 07:20:28.700604 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-19 07:20:28.700615 | 
orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-19 07:20:28.700626 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-19 07:20:28.700637 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-19 07:20:28.700648 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-19 07:20:28.700717 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-19 07:20:28.700730 | orchestrator | 2025-09-19 07:20:28.700746 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-19 07:20:28.700758 | orchestrator | Friday 19 September 2025 07:17:30 +0000 (0:00:00.985) 0:00:51.604 ****** 2025-09-19 07:20:28.700769 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.700780 | orchestrator | 2025-09-19 07:20:28.700789 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-19 07:20:28.700799 | orchestrator | Friday 19 September 2025 07:17:30 +0000 (0:00:00.192) 0:00:51.797 ****** 2025-09-19 07:20:28.700808 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.700817 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:20:28.700827 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:20:28.700837 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:20:28.700846 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:20:28.700856 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:20:28.700865 | orchestrator | 2025-09-19 07:20:28.700874 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 07:20:28.700884 | orchestrator | Friday 19 September 2025 07:17:31 +0000 (0:00:00.948) 0:00:52.745 ****** 2025-09-19 07:20:28.700895 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:20:28.700906 | 
orchestrator | 2025-09-19 07:20:28.700915 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-19 07:20:28.700925 | orchestrator | Friday 19 September 2025 07:17:32 +0000 (0:00:01.300) 0:00:54.045 ****** 2025-09-19 07:20:28.700935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.700945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.700994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.701011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701021 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701134 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701150 | orchestrator | 2025-09-19 07:20:28.701160 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS 
certificate] *** 2025-09-19 07:20:28.701169 | orchestrator | Friday 19 September 2025 07:17:36 +0000 (0:00:03.135) 0:00:57.181 ****** 2025-09-19 07:20:28.701180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:20:28.701195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701205 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.701220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:20:28.701230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:20:28.701240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701259 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:20:28.701269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701279 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:20:28.701289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701322 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:20:28.701332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701357 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:20:28.701367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2025-09-19 07:20:28.701387 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:20:28.701396 | orchestrator | 2025-09-19 07:20:28.701406 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-19 07:20:28.701415 | orchestrator | Friday 19 September 2025 07:17:38 +0000 (0:00:02.242) 0:00:59.424 ****** 2025-09-19 07:20:28.701436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 
07:20:28.701456 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:20:28.701466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:20:28.701502 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:20:28.701518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701529 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.701543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:20:28.701553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701569 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:20:28.701579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:20:28.701589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701598 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:20:28.701614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.701639 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:20:28.701648 | orchestrator | 2025-09-19 07:20:28.701700 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-19 07:20:28.701712 | orchestrator | Friday 19 September 2025 07:17:40 +0000 (0:00:01.982) 0:01:01.406 ****** 2025-09-19 07:20:28.701722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.701739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.701749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.701766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701797 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.701884 | orchestrator | 2025-09-19 07:20:28.701893 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-19 07:20:28.701903 | orchestrator | Friday 19 September 2025 07:17:43 +0000 (0:00:03.256) 0:01:04.663 ****** 2025-09-19 07:20:28.701913 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 07:20:28.701922 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:20:28.701932 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 07:20:28.701942 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:20:28.701952 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 07:20:28.701961 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 07:20:28.701971 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:20:28.701981 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 07:20:28.701990 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 07:20:28.701999 | orchestrator | 2025-09-19 07:20:28.702006 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-19 07:20:28.702039 | orchestrator | Friday 19 September 2025 07:17:46 +0000 (0:00:02.733) 0:01:07.397 ****** 2025-09-19 07:20:28.702049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.702066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.702087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.702096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702116 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702185 | orchestrator | 2025-09-19 07:20:28.702193 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-19 07:20:28.702201 | orchestrator | Friday 19 September 2025 07:17:57 +0000 (0:00:11.534) 0:01:18.932 ****** 2025-09-19 07:20:28.702218 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:20:28.702226 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.702234 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:20:28.702242 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:20:28.702250 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:20:28.702257 | orchestrator | changed: [testbed-node-5] 2025-09-19 
07:20:28.702265 | orchestrator | 2025-09-19 07:20:28.702273 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-19 07:20:28.702281 | orchestrator | Friday 19 September 2025 07:18:00 +0000 (0:00:02.322) 0:01:21.254 ****** 2025-09-19 07:20:28.702293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:20:28.702302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.702310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:20:28.702319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.702327 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.702335 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:20:28.702346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 07:20:28.702363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.702371 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:20:28.702379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2025-09-19 07:20:28.702388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.702396 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:20:28.702404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.702413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.702430 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:20:28.702446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.702455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 07:20:28.702463 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:20:28.702471 | orchestrator | 2025-09-19 07:20:28.702479 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-19 07:20:28.702487 | orchestrator | Friday 19 September 2025 07:18:01 +0000 (0:00:01.734) 0:01:22.989 ****** 2025-09-19 07:20:28.702495 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.702503 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:20:28.702510 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:20:28.702518 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:20:28.702526 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:20:28.702534 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:20:28.702541 | orchestrator | 2025-09-19 07:20:28.702549 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-19 07:20:28.702557 | orchestrator | Friday 19 September 2025 07:18:02 +0000 (0:00:00.595) 0:01:23.584 ****** 2025-09-19 07:20:28.702565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.702592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.702613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 07:20:28.702634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:20:28.702715 | orchestrator | 2025-09-19 07:20:28.702724 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 07:20:28.702732 | orchestrator | Friday 19 September 2025 07:18:05 +0000 (0:00:02.909) 0:01:26.493 ****** 2025-09-19 07:20:28.702739 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.702748 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:20:28.702755 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:20:28.702763 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:20:28.702771 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:20:28.702779 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:20:28.702787 | orchestrator | 2025-09-19 07:20:28.702795 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-19 07:20:28.702803 | orchestrator | Friday 19 September 2025 07:18:05 +0000 (0:00:00.573) 0:01:27.066 ****** 2025-09-19 07:20:28.702811 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:20:28.702818 | orchestrator | 2025-09-19 07:20:28.702826 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 
2025-09-19 07:20:28.702834 | orchestrator | Friday 19 September 2025 07:18:08 +0000 (0:00:02.318) 0:01:29.385 ****** 2025-09-19 07:20:28.702842 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:20:28.702850 | orchestrator | 2025-09-19 07:20:28.702857 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-19 07:20:28.702865 | orchestrator | Friday 19 September 2025 07:18:10 +0000 (0:00:01.900) 0:01:31.286 ****** 2025-09-19 07:20:28.702873 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:20:28.702881 | orchestrator | 2025-09-19 07:20:28.702889 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:20:28.702897 | orchestrator | Friday 19 September 2025 07:18:29 +0000 (0:00:19.647) 0:01:50.933 ****** 2025-09-19 07:20:28.702904 | orchestrator | 2025-09-19 07:20:28.702916 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:20:28.702925 | orchestrator | Friday 19 September 2025 07:18:29 +0000 (0:00:00.065) 0:01:50.998 ****** 2025-09-19 07:20:28.702932 | orchestrator | 2025-09-19 07:20:28.702940 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:20:28.702948 | orchestrator | Friday 19 September 2025 07:18:29 +0000 (0:00:00.061) 0:01:51.059 ****** 2025-09-19 07:20:28.702956 | orchestrator | 2025-09-19 07:20:28.702964 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:20:28.702972 | orchestrator | Friday 19 September 2025 07:18:30 +0000 (0:00:00.065) 0:01:51.125 ****** 2025-09-19 07:20:28.702980 | orchestrator | 2025-09-19 07:20:28.702991 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:20:28.702999 | orchestrator | Friday 19 September 2025 07:18:30 +0000 (0:00:00.067) 0:01:51.192 ****** 2025-09-19 07:20:28.703007 
| orchestrator | 2025-09-19 07:20:28.703014 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 07:20:28.703022 | orchestrator | Friday 19 September 2025 07:18:30 +0000 (0:00:00.063) 0:01:51.255 ****** 2025-09-19 07:20:28.703030 | orchestrator | 2025-09-19 07:20:28.703038 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-19 07:20:28.703046 | orchestrator | Friday 19 September 2025 07:18:30 +0000 (0:00:00.069) 0:01:51.325 ****** 2025-09-19 07:20:28.703054 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:20:28.703062 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:20:28.703070 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:20:28.703077 | orchestrator | 2025-09-19 07:20:28.703085 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-19 07:20:28.703093 | orchestrator | Friday 19 September 2025 07:18:51 +0000 (0:00:21.691) 0:02:13.016 ****** 2025-09-19 07:20:28.703106 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:20:28.703114 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:20:28.703122 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:20:28.703130 | orchestrator | 2025-09-19 07:20:28.703138 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-19 07:20:28.703146 | orchestrator | Friday 19 September 2025 07:19:03 +0000 (0:00:11.544) 0:02:24.562 ****** 2025-09-19 07:20:28.703153 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:20:28.703161 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:20:28.703169 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:20:28.703177 | orchestrator | 2025-09-19 07:20:28.703185 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-19 07:20:28.703196 | orchestrator | Friday 19 September 2025 07:20:14 +0000 
(0:01:10.796) 0:03:35.359 ****** 2025-09-19 07:20:28.703204 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:20:28.703212 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:20:28.703220 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:20:28.703228 | orchestrator | 2025-09-19 07:20:28.703236 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-19 07:20:28.703244 | orchestrator | Friday 19 September 2025 07:20:26 +0000 (0:00:12.026) 0:03:47.386 ****** 2025-09-19 07:20:28.703252 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:20:28.703259 | orchestrator | 2025-09-19 07:20:28.703267 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:20:28.703275 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 07:20:28.703283 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 07:20:28.703292 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 07:20:28.703300 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 07:20:28.703308 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 07:20:28.703316 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 07:20:28.703323 | orchestrator | 2025-09-19 07:20:28.703331 | orchestrator | 2025-09-19 07:20:28.703339 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:20:28.703347 | orchestrator | Friday 19 September 2025 07:20:27 +0000 (0:00:01.287) 0:03:48.674 ****** 2025-09-19 07:20:28.703355 | orchestrator | 
=============================================================================== 2025-09-19 07:20:28.703363 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 70.80s 2025-09-19 07:20:28.703371 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.69s 2025-09-19 07:20:28.703378 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.65s 2025-09-19 07:20:28.703386 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.03s 2025-09-19 07:20:28.703394 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.54s 2025-09-19 07:20:28.703402 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.53s 2025-09-19 07:20:28.703410 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.65s 2025-09-19 07:20:28.703418 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.01s 2025-09-19 07:20:28.703430 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.41s 2025-09-19 07:20:28.703444 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.23s 2025-09-19 07:20:28.703452 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.92s 2025-09-19 07:20:28.703460 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.92s 2025-09-19 07:20:28.703468 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.58s 2025-09-19 07:20:28.703475 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.58s 2025-09-19 07:20:28.703483 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.26s 2025-09-19 07:20:28.703495 | orchestrator | cinder : 
Copying over config.json files for services -------------------- 3.26s 2025-09-19 07:20:28.703503 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.14s 2025-09-19 07:20:28.703510 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.91s 2025-09-19 07:20:28.703518 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.73s 2025-09-19 07:20:28.703526 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.32s 2025-09-19 07:20:28.703534 | orchestrator | 2025-09-19 07:20:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:31.720102 | orchestrator | 2025-09-19 07:20:31 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:20:31.720209 | orchestrator | 2025-09-19 07:20:31 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED 2025-09-19 07:20:31.721086 | orchestrator | 2025-09-19 07:20:31 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:20:31.721994 | orchestrator | 2025-09-19 07:20:31 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:20:31.722103 | orchestrator | 2025-09-19 07:20:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:20:34.748147 | orchestrator | 2025-09-19 07:20:34 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:20:34.748233 | orchestrator | 2025-09-19 07:20:34 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED 2025-09-19 07:20:34.748248 | orchestrator | 2025-09-19 07:20:34 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:20:34.748260 | orchestrator | 2025-09-19 07:20:34 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:20:34.748270 | orchestrator | 2025-09-19 07:20:34 | INFO  | Wait 1 second(s) until the next check 
2025-09-19 07:21:44.481188 | orchestrator | 2025-09-19 07:21:44 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:21:44.481267 | orchestrator | 2025-09-19 07:21:44 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state STARTED 2025-09-19 07:21:44.481282 | orchestrator | 2025-09-19 07:21:44 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:21:44.481587 | orchestrator | 2025-09-19 07:21:44 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:21:44.481618 | orchestrator | 2025-09-19 07:21:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:47.507158 | orchestrator | 2025-09-19 07:21:47 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:21:47.507242 | orchestrator | 2025-09-19 07:21:47 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:21:47.508031 | orchestrator | 2025-09-19 07:21:47 | INFO  | Task 803c824e-d7f2-4155-9240-2c00923ba9ce is in state SUCCESS 2025-09-19 07:21:47.509673 | orchestrator | 2025-09-19 07:21:47.509703 | orchestrator | 2025-09-19 07:21:47.509715 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:21:47.509836 | orchestrator | 2025-09-19 07:21:47.509849 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-09-19 07:21:47.509861 | orchestrator | Friday 19 September 2025 07:19:39 +0000 (0:00:00.254) 0:00:00.254 ****** 2025-09-19 07:21:47.510171 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:21:47.510189 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:21:47.510200 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:21:47.510211 | orchestrator | 2025-09-19 07:21:47.510223 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:21:47.510258 | orchestrator | Friday 19 September 2025 07:19:40 +0000 (0:00:00.311) 0:00:00.566 ****** 2025-09-19 07:21:47.510269 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-19 07:21:47.510280 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-19 07:21:47.510291 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-19 07:21:47.510302 | orchestrator | 2025-09-19 07:21:47.510313 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-19 07:21:47.510324 | orchestrator | 2025-09-19 07:21:47.510334 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 07:21:47.510345 | orchestrator | Friday 19 September 2025 07:19:40 +0000 (0:00:00.405) 0:00:00.972 ****** 2025-09-19 07:21:47.510356 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:21:47.510367 | orchestrator | 2025-09-19 07:21:47.510378 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-19 07:21:47.510400 | orchestrator | Friday 19 September 2025 07:19:41 +0000 (0:00:00.547) 0:00:01.519 ****** 2025-09-19 07:21:47.510411 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-19 07:21:47.510422 | orchestrator | 2025-09-19 07:21:47.510433 | 
orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-19 07:21:47.510446 | orchestrator | Friday 19 September 2025 07:19:44 +0000 (0:00:03.506) 0:00:05.025 ****** 2025-09-19 07:21:47.510464 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-19 07:21:47.510484 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-19 07:21:47.510503 | orchestrator | 2025-09-19 07:21:47.510522 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-19 07:21:47.510541 | orchestrator | Friday 19 September 2025 07:19:51 +0000 (0:00:06.532) 0:00:11.558 ****** 2025-09-19 07:21:47.510561 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 07:21:47.510574 | orchestrator | 2025-09-19 07:21:47.510585 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-19 07:21:47.510595 | orchestrator | Friday 19 September 2025 07:19:54 +0000 (0:00:03.406) 0:00:14.965 ****** 2025-09-19 07:21:47.510606 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 07:21:47.510617 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-19 07:21:47.510627 | orchestrator | 2025-09-19 07:21:47.510638 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-19 07:21:47.510649 | orchestrator | Friday 19 September 2025 07:19:58 +0000 (0:00:03.759) 0:00:18.724 ****** 2025-09-19 07:21:47.510659 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 07:21:47.510670 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-19 07:21:47.510681 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-19 07:21:47.510692 | orchestrator | changed: [testbed-node-0] => (item=observer) 
2025-09-19 07:21:47.510702 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-19 07:21:47.510713 | orchestrator | 2025-09-19 07:21:47.510724 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-19 07:21:47.510734 | orchestrator | Friday 19 September 2025 07:20:14 +0000 (0:00:16.157) 0:00:34.882 ****** 2025-09-19 07:21:47.510745 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-19 07:21:47.510755 | orchestrator | 2025-09-19 07:21:47.510766 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-19 07:21:47.510777 | orchestrator | Friday 19 September 2025 07:20:18 +0000 (0:00:04.318) 0:00:39.201 ****** 2025-09-19 07:21:47.510792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.510851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.510872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.510886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.510901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.510914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.510942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.510957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.510975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.510986 | orchestrator | 2025-09-19 07:21:47.510997 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-19 07:21:47.511009 | orchestrator | Friday 19 September 2025 07:20:20 +0000 (0:00:01.774) 0:00:40.976 ****** 2025-09-19 07:21:47.511020 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-19 
07:21:47.511031 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-09-19 07:21:47.511042 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-09-19 07:21:47.511052 | orchestrator |
2025-09-19 07:21:47.511063 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-09-19 07:21:47.511074 | orchestrator | Friday 19 September 2025 07:20:22 +0000 (0:00:01.357) 0:00:42.333 ******
2025-09-19 07:21:47.511084 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:21:47.511095 | orchestrator |
2025-09-19 07:21:47.511106 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-09-19 07:21:47.511117 | orchestrator | Friday 19 September 2025 07:20:22 +0000 (0:00:00.126) 0:00:42.460 ******
2025-09-19 07:21:47.511127 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:21:47.511138 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:21:47.511149 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:21:47.511160 | orchestrator |
2025-09-19 07:21:47.511171 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-19 07:21:47.511181 | orchestrator | Friday 19 September 2025 07:20:22 +0000 (0:00:00.429) 0:00:42.889 ******
2025-09-19 07:21:47.511192 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:21:47.511209 | orchestrator |
2025-09-19 07:21:47.511220 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-09-19 07:21:47.511231 | orchestrator | Friday 19 September 2025 07:20:23 +0000 (0:00:00.517) 0:00:43.407 ******
2025-09-19 07:21:47.511242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.511261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.511278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.511290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 
07:21:47.511361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511372 | orchestrator | 2025-09-19 07:21:47.511383 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-19 07:21:47.511394 | orchestrator | Friday 19 September 2025 07:20:27 +0000 (0:00:03.938) 0:00:47.346 ****** 2025-09-19 07:21:47.511410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:21:47.511427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511451 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:21:47.511468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:21:47.511480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:21:47.511495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511546 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:21:47.511557 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:21:47.511568 | orchestrator | 2025-09-19 
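Every item of the two backend-TLS tasks above is skipped because the service dicts in this deployment carry `'tls_backend': 'no'` in their haproxy entries. A minimal Python sketch of that selection logic (a simplified stand-in, not kolla-ansible's actual task condition) using the dicts as logged:

```python
# Simplified illustration of why the "backend internal TLS" copy tasks
# skip every item: no haproxy frontend in the logged dicts has
# 'tls_backend': 'yes'. This is NOT kolla-ansible's real code.

services = {
    "barbican-api": {
        "enabled": True,
        "haproxy": {
            "barbican_api": {"tls_backend": "no"},
            "barbican_api_external": {"tls_backend": "no"},
        },
    },
    # listener and worker expose no haproxy frontends at all
    "barbican-keystone-listener": {"enabled": True},
    "barbican-worker": {"enabled": True},
}

def needs_backend_tls(service: dict) -> bool:
    # A service would need a backend cert/key only if at least one of
    # its haproxy frontends terminates TLS at the backend.
    return any(
        entry.get("tls_backend") == "yes"
        for entry in service.get("haproxy", {}).values()
    )

to_copy = [name for name, svc in services.items() if needs_backend_tls(svc)]
print(to_copy)  # -> [] : every item is skipped, matching the log
```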
07:21:47.511579 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-19 07:21:47.511590 | orchestrator | Friday 19 September 2025 07:20:28 +0000 (0:00:01.288) 0:00:48.634 ****** 2025-09-19 07:21:47.511608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:21:47.511624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511660 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:21:47.511671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:21:47.511683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511705 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:21:47.511723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:21:47.511739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.511768 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:21:47.511779 | orchestrator | 2025-09-19 07:21:47.511790 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-19 07:21:47.511817 | orchestrator | Friday 19 September 2025 07:20:29 +0000 (0:00:01.411) 0:00:50.046 ****** 2025-09-19 07:21:47.511829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.511847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.511859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.511881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.511954 | orchestrator | 2025-09-19 07:21:47.511966 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-19 07:21:47.511977 | orchestrator | Friday 19 September 2025 07:20:33 +0000 (0:00:03.908) 0:00:53.954 ****** 2025-09-19 07:21:47.511994 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:21:47.512005 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:21:47.512016 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:21:47.512027 | orchestrator | 2025-09-19 07:21:47.512038 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-19 07:21:47.512048 | orchestrator | Friday 19 September 2025 07:20:36 +0000 (0:00:02.988) 0:00:56.943 ****** 2025-09-19 07:21:47.512059 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:21:47.512070 | orchestrator | 2025-09-19 07:21:47.512085 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-19 07:21:47.512096 | orchestrator | Friday 19 September 2025 07:20:37 +0000 (0:00:00.755) 0:00:57.698 ****** 2025-09-19 07:21:47.512107 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:21:47.512118 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:21:47.512129 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:21:47.512140 | orchestrator | 2025-09-19 07:21:47.512150 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-19 07:21:47.512161 | orchestrator | Friday 19 September 2025 07:20:37 +0000 (0:00:00.499) 0:00:58.197 ****** 
2025-09-19 07:21:47.512172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.512184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.512201 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.512219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512290 | orchestrator | 2025-09-19 07:21:47.512301 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-19 07:21:47.512317 | orchestrator | Friday 19 September 2025 07:20:47 +0000 (0:00:09.574) 0:01:07.771 ****** 2025-09-19 07:21:47.512335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:21:47.512351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.512363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.512374 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:21:47.512386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:21:47.512397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.512414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.512431 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:21:47.512446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 07:21:47.512458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.512470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:21:47.512481 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:21:47.512491 | orchestrator | 2025-09-19 07:21:47.512505 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-19 07:21:47.512525 | orchestrator | Friday 19 September 2025 07:20:48 +0000 (0:00:00.889) 0:01:08.661 ****** 2025-09-19 07:21:47.512546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.512584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.512612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 07:21:47.512632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-09-19 07:21:47.512649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 
2025-09-19 07:21:47.512737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:21:47.512761 | orchestrator | 2025-09-19 07:21:47.512772 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 07:21:47.512783 | orchestrator | Friday 19 September 2025 07:20:52 +0000 (0:00:03.765) 0:01:12.426 ****** 2025-09-19 07:21:47.512794 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:21:47.512842 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:21:47.512853 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:21:47.512864 | orchestrator | 2025-09-19 07:21:47.512875 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-19 07:21:47.512886 | orchestrator | Friday 19 September 2025 07:20:52 +0000 (0:00:00.496) 
0:01:12.922 ****** 2025-09-19 07:21:47.512896 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:21:47.512907 | orchestrator | 2025-09-19 07:21:47.512918 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-19 07:21:47.512929 | orchestrator | Friday 19 September 2025 07:20:54 +0000 (0:00:02.358) 0:01:15.280 ****** 2025-09-19 07:21:47.512940 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:21:47.512950 | orchestrator | 2025-09-19 07:21:47.512961 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-19 07:21:47.512972 | orchestrator | Friday 19 September 2025 07:20:57 +0000 (0:00:02.250) 0:01:17.530 ****** 2025-09-19 07:21:47.512983 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:21:47.512994 | orchestrator | 2025-09-19 07:21:47.513005 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-19 07:21:47.513016 | orchestrator | Friday 19 September 2025 07:21:09 +0000 (0:00:12.214) 0:01:29.745 ****** 2025-09-19 07:21:47.513026 | orchestrator | 2025-09-19 07:21:47.513037 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-19 07:21:47.513048 | orchestrator | Friday 19 September 2025 07:21:09 +0000 (0:00:00.080) 0:01:29.825 ****** 2025-09-19 07:21:47.513059 | orchestrator | 2025-09-19 07:21:47.513070 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-19 07:21:47.513081 | orchestrator | Friday 19 September 2025 07:21:09 +0000 (0:00:00.066) 0:01:29.892 ****** 2025-09-19 07:21:47.513091 | orchestrator | 2025-09-19 07:21:47.513102 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-19 07:21:47.513113 | orchestrator | Friday 19 September 2025 07:21:09 +0000 (0:00:00.070) 0:01:29.962 ****** 2025-09-19 07:21:47.513130 | orchestrator | changed: 
[testbed-node-0] 2025-09-19 07:21:47.513141 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:21:47.513152 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:21:47.513163 | orchestrator | 2025-09-19 07:21:47.513173 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-19 07:21:47.513184 | orchestrator | Friday 19 September 2025 07:21:23 +0000 (0:00:13.628) 0:01:43.591 ****** 2025-09-19 07:21:47.513195 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:21:47.513206 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:21:47.513217 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:21:47.513228 | orchestrator | 2025-09-19 07:21:47.513238 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-19 07:21:47.513249 | orchestrator | Friday 19 September 2025 07:21:37 +0000 (0:00:13.787) 0:01:57.379 ****** 2025-09-19 07:21:47.513260 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:21:47.513271 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:21:47.513282 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:21:47.513292 | orchestrator | 2025-09-19 07:21:47.513303 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:21:47.513315 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 07:21:47.513326 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 07:21:47.513337 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 07:21:47.513348 | orchestrator | 2025-09-19 07:21:47.513359 | orchestrator | 2025-09-19 07:21:47.513370 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:21:47.513380 | orchestrator | Friday 19 September 
2025 07:21:44 +0000 (0:00:06.975) 0:02:04.354 ****** 2025-09-19 07:21:47.513391 | orchestrator | =============================================================================== 2025-09-19 07:21:47.513402 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.16s 2025-09-19 07:21:47.513419 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 13.79s 2025-09-19 07:21:47.513430 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.63s 2025-09-19 07:21:47.513441 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.21s 2025-09-19 07:21:47.513451 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.57s 2025-09-19 07:21:47.513462 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.98s 2025-09-19 07:21:47.513473 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.53s 2025-09-19 07:21:47.513484 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.32s 2025-09-19 07:21:47.513494 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.94s 2025-09-19 07:21:47.513505 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.91s 2025-09-19 07:21:47.513516 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.77s 2025-09-19 07:21:47.513527 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.76s 2025-09-19 07:21:47.513537 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.51s 2025-09-19 07:21:47.513548 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.41s 2025-09-19 07:21:47.513559 | orchestrator | barbican : Copying over barbican-api.ini 
-------------------------------- 2.99s 2025-09-19 07:21:47.513569 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.36s 2025-09-19 07:21:47.513585 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.25s 2025-09-19 07:21:47.513602 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.77s 2025-09-19 07:21:47.513613 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.41s 2025-09-19 07:21:47.513623 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.36s 2025-09-19 07:21:47.513634 | orchestrator | 2025-09-19 07:21:47 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:21:47.513645 | orchestrator | 2025-09-19 07:21:47 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:21:47.513656 | orchestrator | 2025-09-19 07:21:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:21:50.537056 | orchestrator | 2025-09-19 07:21:50 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:21:50.537537 | orchestrator | 2025-09-19 07:21:50 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:21:50.538469 | orchestrator | 2025-09-19 07:21:50 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:21:50.539847 | orchestrator | 2025-09-19 07:21:50 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:21:50.539890 | orchestrator | 2025-09-19 07:21:50 | INFO  | Wait 1 second(s) until the next check
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:21.015534 | orchestrator | 2025-09-19 07:22:21 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:21.015996 | orchestrator | 2025-09-19 07:22:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:24.064402 | orchestrator | 2025-09-19 07:22:24 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:24.069024 | orchestrator | 2025-09-19 07:22:24 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:24.069072 | orchestrator | 2025-09-19 07:22:24 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:24.075047 | orchestrator | 2025-09-19 07:22:24 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:24.075068 | orchestrator | 2025-09-19 07:22:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:27.113859 | orchestrator | 2025-09-19 07:22:27 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:27.114271 | orchestrator | 2025-09-19 07:22:27 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:27.115242 | orchestrator | 2025-09-19 07:22:27 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:27.116170 | orchestrator | 2025-09-19 07:22:27 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:27.116198 | orchestrator | 2025-09-19 07:22:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:30.167034 | orchestrator | 2025-09-19 07:22:30 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:30.167756 | orchestrator | 2025-09-19 07:22:30 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:30.169033 | orchestrator | 2025-09-19 07:22:30 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:30.170171 | orchestrator | 2025-09-19 07:22:30 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:30.170208 | orchestrator | 2025-09-19 07:22:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:33.211397 | orchestrator | 2025-09-19 07:22:33 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:33.211860 | orchestrator | 2025-09-19 07:22:33 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:33.213021 | orchestrator | 2025-09-19 07:22:33 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:33.214279 | orchestrator | 2025-09-19 07:22:33 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:33.214331 | orchestrator | 2025-09-19 07:22:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:36.244341 | orchestrator | 2025-09-19 07:22:36 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:36.244761 | orchestrator | 2025-09-19 07:22:36 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:36.245498 | orchestrator | 2025-09-19 07:22:36 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:36.246336 | orchestrator | 2025-09-19 07:22:36 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:36.246366 | orchestrator | 2025-09-19 07:22:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:39.287020 | orchestrator | 2025-09-19 07:22:39 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:39.289218 | orchestrator | 2025-09-19 07:22:39 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:39.291102 | orchestrator | 2025-09-19 07:22:39 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:39.293041 | orchestrator | 2025-09-19 07:22:39 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:39.293064 | orchestrator | 2025-09-19 07:22:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:42.324939 | orchestrator | 2025-09-19 07:22:42 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:42.325270 | orchestrator | 2025-09-19 07:22:42 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:42.327223 | orchestrator | 2025-09-19 07:22:42 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:42.328137 | orchestrator | 2025-09-19 07:22:42 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:42.328181 | orchestrator | 2025-09-19 07:22:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:45.369793 | orchestrator | 2025-09-19 07:22:45 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:45.369906 | orchestrator | 2025-09-19 07:22:45 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:45.370825 | orchestrator | 2025-09-19 07:22:45 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:45.372017 | orchestrator | 2025-09-19 07:22:45 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:45.372098 | orchestrator | 2025-09-19 07:22:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:48.409004 | orchestrator | 2025-09-19 07:22:48 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:48.409396 | orchestrator | 2025-09-19 07:22:48 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:48.410165 | orchestrator | 2025-09-19 07:22:48 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:48.410814 | orchestrator | 2025-09-19 07:22:48 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:48.410983 | orchestrator | 2025-09-19 07:22:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:51.444852 | orchestrator | 2025-09-19 07:22:51 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:51.445146 | orchestrator | 2025-09-19 07:22:51 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:51.445771 | orchestrator | 2025-09-19 07:22:51 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:51.447204 | orchestrator | 2025-09-19 07:22:51 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:51.447246 | orchestrator | 2025-09-19 07:22:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:54.493060 | orchestrator | 2025-09-19 07:22:54 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:54.493922 | orchestrator | 2025-09-19 07:22:54 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:54.493951 | orchestrator | 2025-09-19 07:22:54 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:54.494405 | orchestrator | 2025-09-19 07:22:54 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:54.494448 | orchestrator | 2025-09-19 07:22:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:22:57.523572 | orchestrator | 2025-09-19 07:22:57 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:22:57.525276 | orchestrator | 2025-09-19 07:22:57 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:22:57.527489 | orchestrator | 2025-09-19 07:22:57 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:22:57.528611 | orchestrator | 2025-09-19 07:22:57 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:22:57.528639 | orchestrator | 2025-09-19 07:22:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:00.580130 | orchestrator | 2025-09-19 07:23:00 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:00.581874 | orchestrator | 2025-09-19 07:23:00 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:00.585256 | orchestrator | 2025-09-19 07:23:00 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:00.587354 | orchestrator | 2025-09-19 07:23:00 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:00.587558 | orchestrator | 2025-09-19 07:23:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:03.669221 | orchestrator | 2025-09-19 07:23:03 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:03.671802 | orchestrator | 2025-09-19 07:23:03 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:03.672687 | orchestrator | 2025-09-19 07:23:03 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:03.674192 | orchestrator | 2025-09-19 07:23:03 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:03.674217 | orchestrator | 2025-09-19 07:23:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:06.709547 | orchestrator | 2025-09-19 07:23:06 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:06.710000 | orchestrator | 2025-09-19 07:23:06 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:06.710743 | orchestrator | 2025-09-19 07:23:06 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:06.711479 | orchestrator | 2025-09-19 07:23:06 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:06.711502 | orchestrator | 2025-09-19 07:23:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:09.757185 | orchestrator | 2025-09-19 07:23:09 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:09.758541 | orchestrator | 2025-09-19 07:23:09 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:09.760263 | orchestrator | 2025-09-19 07:23:09 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:09.761808 | orchestrator | 2025-09-19 07:23:09 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:09.761854 | orchestrator | 2025-09-19 07:23:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:12.806588 | orchestrator | 2025-09-19 07:23:12 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:12.808177 | orchestrator | 2025-09-19 07:23:12 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:12.808227 | orchestrator | 2025-09-19 07:23:12 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:12.809497 | orchestrator | 2025-09-19 07:23:12 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:12.809645 | orchestrator | 2025-09-19 07:23:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:15.849824 | orchestrator | 2025-09-19 07:23:15 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:15.849983 | orchestrator | 2025-09-19 07:23:15 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:15.851218 | orchestrator | 2025-09-19 07:23:15 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:15.852334 | orchestrator | 2025-09-19 07:23:15 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:15.852495 | orchestrator | 2025-09-19 07:23:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:18.888382 | orchestrator | 2025-09-19 07:23:18 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:18.888912 | orchestrator | 2025-09-19 07:23:18 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:18.889637 | orchestrator | 2025-09-19 07:23:18 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:18.890759 | orchestrator | 2025-09-19 07:23:18 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:18.890933 | orchestrator | 2025-09-19 07:23:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:21.924696 | orchestrator | 2025-09-19 07:23:21 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:21.924880 | orchestrator | 2025-09-19 07:23:21 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:21.925665 | orchestrator | 2025-09-19 07:23:21 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:21.926441 | orchestrator | 2025-09-19 07:23:21 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:21.926478 | orchestrator | 2025-09-19 07:23:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:24.964511 | orchestrator | 2025-09-19 07:23:24 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:24.966461 | orchestrator | 2025-09-19 07:23:24 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:24.968153 | orchestrator | 2025-09-19 07:23:24 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:24.969541 | orchestrator | 2025-09-19 07:23:24 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:24.969774 | orchestrator | 2025-09-19 07:23:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:28.002752 | orchestrator | 2025-09-19 07:23:28 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:28.005670 | orchestrator | 2025-09-19 07:23:28 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:28.007923 | orchestrator | 2025-09-19 07:23:28 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:28.010851 | orchestrator | 2025-09-19 07:23:28 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:28.010918 | orchestrator | 2025-09-19 07:23:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:31.058561 | orchestrator | 2025-09-19 07:23:31 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:31.060881 | orchestrator | 2025-09-19 07:23:31 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:31.063413 | orchestrator | 2025-09-19 07:23:31 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:23:31.066503 | orchestrator | 2025-09-19 07:23:31 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED 2025-09-19 07:23:31.066531 | orchestrator | 2025-09-19 07:23:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:23:34.108795 | orchestrator | 2025-09-19 07:23:34 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state STARTED 2025-09-19 07:23:34.109312 | orchestrator | 2025-09-19 07:23:34 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED 2025-09-19 07:23:34.110654 | orchestrator | 2025-09-19 07:23:34 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
orchestrator | 2025-09-19 07:23:34 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED
orchestrator | 2025-09-19 07:23:34 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2025-09-19 07:23:37 | INFO  | Task a9029d8f-6f56-4cca-8811-85d281320d7f is in state SUCCESS
orchestrator | 2025-09-19 07:23:37 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state STARTED
orchestrator | 2025-09-19 07:23:37 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
orchestrator | 2025-09-19 07:23:37 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state STARTED
orchestrator | 2025-09-19 07:23:37 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2025-09-19 07:23:40 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED
orchestrator | 2025-09-19 07:23:40 | INFO  | Task 98d522dc-8cef-4546-98b4-92086ffd6426 is in state SUCCESS
orchestrator |
orchestrator | PLAY [Download ironic ipa images] **********************************************
orchestrator |
orchestrator | TASK [Ensure the destination directory exists] *********************************
orchestrator | Friday 19 September 2025 07:21:51 +0000 (0:00:00.111) 0:00:00.111 ******
orchestrator | changed: [localhost]
orchestrator |
orchestrator | TASK [Download ironic-agent initramfs] *****************************************
orchestrator | Friday 19 September 2025 07:21:52 +0000 (0:00:00.983) 0:00:01.095 ******
orchestrator | changed: [localhost]
orchestrator |
orchestrator | TASK [Download ironic-agent kernel] ********************************************
orchestrator | Friday 19 September 2025 07:22:26 +0000 (0:00:33.398) 0:00:34.493 ******
orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left).
orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (1 retries left).
orchestrator | changed: [localhost]
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Friday 19 September 2025 07:23:35 +0000 (0:01:09.619) 0:01:44.113 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Friday 19 September 2025 07:23:36 +0000 (0:00:00.322) 0:01:44.436 ******
orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
orchestrator |
orchestrator | PLAY [Apply role ironic] *******************************************************
orchestrator | skipping: no hosts matched
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | localhost      : ok=3  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Friday 19 September 2025 07:23:36 +0000 (0:00:00.429) 0:01:44.865 ******
orchestrator | ===============================================================================
orchestrator | Download ironic-agent kernel ------------------------------------------- 69.62s
orchestrator | Download ironic-agent initramfs ---------------------------------------- 33.40s
orchestrator | Ensure the destination directory exists --------------------------------- 0.98s
orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Friday 19 September 2025 07:19:32 +0000 (0:00:00.298) 0:00:00.298 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Friday 19 September 2025 07:19:33 +0000 (0:00:00.810) 0:00:01.108 ******
orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
orchestrator |
orchestrator | PLAY [Apply role neutron] ******************************************************
orchestrator |
orchestrator | TASK [neutron : include_tasks] *************************************************
orchestrator | Friday 19 September 2025 07:19:34 +0000 (0:00:00.652) 0:00:01.760 ******
orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [neutron : Get container facts] *******************************************
orchestrator | Friday 19 September 2025 07:19:35 +0000 (0:00:01.185) 0:00:02.946 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [neutron : Get container volume facts] ************************************
orchestrator | Friday 19 September 2025 07:19:36 +0000 (0:00:01.268) 0:00:04.215 ******
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
orchestrator | Friday 19 September 2025 07:19:37 +0000 (0:00:01.162) 0:00:05.377 ******
orchestrator | ok: [testbed-node-0] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-1] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-2] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-3] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-4] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-5] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator |
orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
orchestrator | Friday 19 September 2025 07:19:38 +0000 (0:00:00.779) 0:00:06.157 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
orchestrator | Friday 19 September 2025 07:19:39 +0000 (0:00:00.617) 0:00:06.775 ******
orchestrator | changed: [testbed-node-0] => (item=neutron (network))
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
orchestrator | Friday 19 September 2025 07:19:43 +0000 (0:00:03.875) 0:00:10.651 ******
orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
orchestrator | Friday 19 September 2025 07:19:49 +0000 (0:00:06.782) 0:00:17.433 ******
orchestrator | ok: [testbed-node-0] => (item=service)
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
orchestrator | Friday 19 September 2025 07:19:53 +0000 (0:00:03.369) 0:00:20.802 ******
orchestrator | [WARNING]: Module did not set no_log for update_password
orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
orchestrator | Friday 19 September 2025 07:19:57 +0000 (0:00:03.822) 0:00:24.625 ******
orchestrator | ok: [testbed-node-0] => (item=admin)
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
orchestrator | Friday 19 September 2025 07:20:00 +0000 (0:00:03.481) 0:00:28.106 ******
orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
orchestrator |
orchestrator | TASK [neutron : include_tasks] *************************************************
orchestrator | Friday 19 September 2025 07:20:08 +0000 (0:00:08.093) 0:00:36.200 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [Load and persist
kernel modules] ***************************************** 2025-09-19 07:23:40.244792 | orchestrator | Friday 19 September 2025 07:20:09 +0000 (0:00:00.606) 0:00:36.806 ****** 2025-09-19 07:23:40.244802 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.244811 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.244821 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.244844 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.244854 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.244863 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.244872 | orchestrator | 2025-09-19 07:23:40.244882 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-19 07:23:40.244891 | orchestrator | Friday 19 September 2025 07:20:11 +0000 (0:00:01.748) 0:00:38.554 ****** 2025-09-19 07:23:40.244901 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:23:40.244911 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:23:40.244920 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:23:40.244930 | orchestrator | ok: [testbed-node-4] 2025-09-19 07:23:40.244939 | orchestrator | ok: [testbed-node-5] 2025-09-19 07:23:40.244948 | orchestrator | ok: [testbed-node-3] 2025-09-19 07:23:40.244958 | orchestrator | 2025-09-19 07:23:40.244967 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-19 07:23:40.244977 | orchestrator | Friday 19 September 2025 07:20:12 +0000 (0:00:01.805) 0:00:40.360 ****** 2025-09-19 07:23:40.244987 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.244996 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.245006 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.245015 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.245024 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.245033 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
07:23:40.245043 | orchestrator | 2025-09-19 07:23:40.245052 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-19 07:23:40.245062 | orchestrator | Friday 19 September 2025 07:20:15 +0000 (0:00:02.108) 0:00:42.469 ****** 2025-09-19 07:23:40.245076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.245125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.245149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.245169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.245181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.245191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.245201 | orchestrator | 2025-09-19 07:23:40.245211 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-19 07:23:40.245220 | orchestrator | Friday 19 September 2025 07:20:18 +0000 (0:00:03.389) 0:00:45.858 ****** 2025-09-19 07:23:40.245230 | orchestrator | [WARNING]: Skipped 2025-09-19 07:23:40.245240 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-19 07:23:40.245250 | orchestrator | due to this access issue: 2025-09-19 07:23:40.245259 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-19 07:23:40.245285 | orchestrator | a directory 2025-09-19 07:23:40.245301 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:23:40.245311 | orchestrator | 2025-09-19 07:23:40.245320 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-19 07:23:40.245339 | orchestrator | Friday 19 September 2025 07:20:19 +0000 (0:00:00.857) 0:00:46.716 ****** 2025-09-19 07:23:40.245350 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:23:40.245361 | orchestrator | 2025-09-19 07:23:40.245370 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-19 07:23:40.245380 | orchestrator | Friday 19 September 2025 07:20:20 +0000 (0:00:00.935) 0:00:47.652 ****** 2025-09-19 07:23:40.245442 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.245454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.245464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.245474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.245489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.245513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.245523 | orchestrator | 2025-09-19 07:23:40.245532 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-19 07:23:40.245542 | orchestrator | Friday 19 September 2025 07:20:22 +0000 (0:00:02.728) 0:00:50.380 ****** 2025-09-19 07:23:40.245552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.245562 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.245572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.245582 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.245593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.245607 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.245623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.245633 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.245649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.245660 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.245669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.245679 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
07:23:40.245689 | orchestrator | 2025-09-19 07:23:40.245698 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-19 07:23:40.245708 | orchestrator | Friday 19 September 2025 07:20:25 +0000 (0:00:02.450) 0:00:52.831 ****** 2025-09-19 07:23:40.245718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.245728 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.245746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.245762 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.245779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.245789 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.245799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.245809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.245819 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.245828 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.245838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.245853 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.245863 | orchestrator | 2025-09-19 07:23:40.245872 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 
2025-09-19 07:23:40.245882 | orchestrator | Friday 19 September 2025 07:20:28 +0000 (0:00:03.231) 0:00:56.063 ****** 2025-09-19 07:23:40.245891 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.245901 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.245910 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.245920 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.245929 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.245942 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.245952 | orchestrator | 2025-09-19 07:23:40.245961 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-19 07:23:40.245971 | orchestrator | Friday 19 September 2025 07:20:31 +0000 (0:00:02.609) 0:00:58.672 ****** 2025-09-19 07:23:40.245981 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.245990 | orchestrator | 2025-09-19 07:23:40.246000 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-19 07:23:40.246009 | orchestrator | Friday 19 September 2025 07:20:31 +0000 (0:00:00.125) 0:00:58.797 ****** 2025-09-19 07:23:40.246062 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.246072 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.246137 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.246148 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.246157 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.246166 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.246176 | orchestrator | 2025-09-19 07:23:40.246185 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-19 07:23:40.246195 | orchestrator | Friday 19 September 2025 07:20:31 +0000 (0:00:00.538) 0:00:59.335 ****** 2025-09-19 07:23:40.246218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.246236 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.246252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.246268 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.246296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.246313 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.246336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.246347 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.246357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.246367 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.246383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.246393 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.246403 | orchestrator | 2025-09-19 07:23:40.246412 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-19 07:23:40.246421 | orchestrator | Friday 19 September 2025 07:20:34 +0000 (0:00:02.358) 0:01:01.694 ****** 2025-09-19 07:23:40.246431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.246448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.246462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.246478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.246489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.246499 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.246515 | orchestrator | 2025-09-19 07:23:40.246525 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-19 07:23:40.246535 | orchestrator | Friday 19 September 2025 07:20:38 +0000 (0:00:03.839) 0:01:05.533 ****** 2025-09-19 07:23:40.246544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.246559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.246577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.246587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.246603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.246613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.246623 | orchestrator | 2025-09-19 07:23:40.246633 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-19 07:23:40.246643 | orchestrator | Friday 19 September 2025 07:20:44 +0000 (0:00:06.637) 0:01:12.171 ****** 2025-09-19 07:23:40.246657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.246667 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.246692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.246701 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.246714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.246722 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.246730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.246738 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.246750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.246758 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.246766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 07:23:40.246774 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.246782 | orchestrator |
2025-09-19 07:23:40.246789 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-19 07:23:40.246797 | orchestrator | Friday 19 September 2025 07:20:48 +0000 (0:00:03.767) 0:01:15.939 ******
2025-09-19 07:23:40.246805 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.246817 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.246825 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.246833 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.246841 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:23:40.246853 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:23:40.246861 | orchestrator |
2025-09-19 07:23:40.246870 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-19 07:23:40.246883 | orchestrator | Friday 19 September 2025 07:20:51 +0000 (0:00:03.128) 0:01:19.068 ******
2025-09-19 07:23:40.246896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.246910 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.246924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.246937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.246951 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.246960 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.246975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.246990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.247052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 07:23:40.247123 | orchestrator |
2025-09-19 07:23:40.247134 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-19 07:23:40.247142 | orchestrator | Friday 19 September 2025 07:20:55 +0000 (0:00:03.519) 0:01:22.587 ******
2025-09-19 07:23:40.247150 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.247158 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.247165 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.247173 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.247181 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.247188 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.247196 | orchestrator |
2025-09-19 07:23:40.247203 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-19 07:23:40.247211 | orchestrator | Friday 19 September 2025 07:20:57 +0000 (0:00:02.189) 0:01:24.777 ******
2025-09-19 07:23:40.247219 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.247273 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.247282 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.247290 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.247298 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.247305 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.247313 | orchestrator |
2025-09-19 07:23:40.247321 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-19 07:23:40.247329 | orchestrator | Friday 19 September 2025 07:20:59 +0000 (0:00:02.532) 0:01:27.310 ******
2025-09-19 07:23:40.247337 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.247344 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.247352 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.247360 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.247378 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.247385 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.247393 | orchestrator |
2025-09-19 07:23:40.247401 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-19 07:23:40.247409 | orchestrator | Friday 19 September 2025 07:21:02 +0000 (0:00:02.752) 0:01:30.062 ******
2025-09-19 07:23:40.247416 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.247424 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.247432 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.247439 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.247447 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.247455 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.247468 | orchestrator |
2025-09-19 07:23:40.247476 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-19 07:23:40.247488 | orchestrator | Friday 19 September 2025 07:21:04 +0000 (0:00:01.908) 0:01:31.970 ******
2025-09-19 07:23:40.247497 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.247504 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.247512 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.247520 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.247527 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.247535 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.247543 | orchestrator |
2025-09-19 07:23:40.247550 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-19 07:23:40.247558 | orchestrator | Friday 19 September 2025 07:21:06 +0000 (0:00:02.246) 0:01:34.217 ******
2025-09-19 07:23:40.247566 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.247573 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.247581 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.247589 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.247596 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.247604 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.247611 | orchestrator |
2025-09-19 07:23:40.247619 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-19 07:23:40.247627 | orchestrator | Friday 19 September 2025 07:21:09 +0000 (0:00:02.516) 0:01:36.733 ******
2025-09-19 07:23:40.247646 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 07:23:40.247654 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.247662 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 07:23:40.247670 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.247684 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 07:23:40.247717 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.247725 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 07:23:40.247733 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.247741 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 07:23:40.247749 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.247756 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 07:23:40.247764 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.247819 | orchestrator |
2025-09-19 07:23:40.247827 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-09-19 07:23:40.247835 | orchestrator | Friday 19 September 2025 07:21:12 +0000 (0:00:03.542) 0:01:40.276 ******
2025-09-19 07:23:40.247844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 07:23:40.247852 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.247860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.247874 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.247886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.247895 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.247928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.247938 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.247946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.247954 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.247962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.247976 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.247984 | orchestrator | 2025-09-19 07:23:40.247992 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-19 07:23:40.248000 | orchestrator | Friday 19 September 2025 07:21:15 +0000 (0:00:02.473) 0:01:42.750 ****** 2025-09-19 07:23:40.248008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.248016 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.248028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.248036 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.248051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.248059 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:23:40.248067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.248101 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.248111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.248119 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.248127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.248135 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248142 | orchestrator |
2025-09-19 07:23:40.248154 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-09-19 07:23:40.248162 | orchestrator | Friday 19 September 2025 07:21:17 +0000 (0:00:02.031) 0:01:44.782 ******
2025-09-19 07:23:40.248169 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248177 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248185 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.248193 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248200 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.248208 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248216 | orchestrator |
2025-09-19 07:23:40.248223 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-09-19 07:23:40.248231 | orchestrator | Friday 19 September 2025 07:21:19 +0000 (0:00:01.743) 0:01:46.525 ******
2025-09-19 07:23:40.248239 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248246 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248254 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248261 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:23:40.248269 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:23:40.248276 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:23:40.248284 | orchestrator |
2025-09-19 07:23:40.248292 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-09-19 07:23:40.248300 | orchestrator | Friday 19 September 2025 07:21:22 +0000 (0:00:03.487) 0:01:50.013 ******
2025-09-19 07:23:40.248307 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248315 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248322 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248330 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.248338 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248345 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.248353 | orchestrator |
2025-09-19 07:23:40.248365 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-09-19 07:23:40.248373 | orchestrator | Friday 19 September 2025 07:21:27 +0000 (0:00:04.591) 0:01:54.605 ******
2025-09-19 07:23:40.248386 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248394 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248402 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248409 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.248417 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248425 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.248435 | orchestrator |
2025-09-19 07:23:40.248448 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-09-19 07:23:40.248461 | orchestrator | Friday 19 September 2025 07:21:30 +0000 (0:00:03.005) 0:01:57.610 ******
2025-09-19 07:23:40.248474 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248486 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248498 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248511 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248525 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.248538 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.248550 | orchestrator |
2025-09-19 07:23:40.248563 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-09-19 07:23:40.248571 | orchestrator | Friday 19 September 2025 07:21:32 +0000 (0:00:02.022) 0:01:59.632 ******
2025-09-19 07:23:40.248579 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248586 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248594 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248601 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248609 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.248616 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.248624 | orchestrator |
2025-09-19 07:23:40.248632 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-09-19 07:23:40.248639 | orchestrator | Friday 19 September 2025 07:21:34 +0000 (0:00:02.427) 0:02:02.060 ******
2025-09-19 07:23:40.248647 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248655 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248662 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248670 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248678 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.248685 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.248693 | orchestrator |
2025-09-19 07:23:40.248701 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-09-19 07:23:40.248708 | orchestrator | Friday 19 September 2025 07:21:37 +0000 (0:00:02.564) 0:02:04.624 ******
2025-09-19 07:23:40.248716 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248723 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248731 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248739 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.248746 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.248754 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248762 | orchestrator |
2025-09-19 07:23:40.248769 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-09-19 07:23:40.248777 | orchestrator | Friday 19 September 2025 07:21:40 +0000 (0:00:03.570) 0:02:08.195 ******
2025-09-19 07:23:40.248785 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248792 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248800 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248808 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.248815 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248823 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.248830 | orchestrator |
2025-09-19 07:23:40.248838 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-09-19 07:23:40.248846 | orchestrator | Friday 19 September 2025 07:21:42 +0000 (0:00:02.175) 0:02:10.370 ******
2025-09-19 07:23:40.248853 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 07:23:40.248867 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.248875 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 07:23:40.248883 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.248891 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 07:23:40.248898 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.248912 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 07:23:40.248926 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.248939 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 07:23:40.248951 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:23:40.248964 | orchestrator | skipping: [testbed-node-5] =>
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 07:23:40.248976 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.248988 | orchestrator | 2025-09-19 07:23:40.249000 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-19 07:23:40.249011 | orchestrator | Friday 19 September 2025 07:21:45 +0000 (0:00:02.644) 0:02:13.015 ****** 2025-09-19 07:23:40.249032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.249045 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.249057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.249069 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.249136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 07:23:40.249164 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.249177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.249195 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:23:40.249209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.249223 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:23:40.249246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 07:23:40.249260 | orchestrator | skipping: [testbed-node-3] 
2025-09-19 07:23:40.249273 | orchestrator | 2025-09-19 07:23:40.249286 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-19 07:23:40.249298 | orchestrator | Friday 19 September 2025 07:21:48 +0000 (0:00:02.752) 0:02:15.768 ****** 2025-09-19 07:23:40.249308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.249319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 
07:23:40.249347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.249360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 07:23:40.249379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.249391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 07:23:40.249403 | orchestrator | 2025-09-19 07:23:40.249414 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-19 07:23:40.249433 | orchestrator | Friday 19 September 2025 07:21:51 +0000 (0:00:03.636) 0:02:19.405 ****** 2025-09-19 07:23:40.249443 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.249450 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.249456 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.249463 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
07:23:40.249469 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:23:40.249476 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:23:40.249482 | orchestrator |
2025-09-19 07:23:40.249489 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-09-19 07:23:40.249495 | orchestrator | Friday 19 September 2025 07:21:52 +0000 (0:00:00.615) 0:02:20.021 ******
2025-09-19 07:23:40.249503 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.249515 | orchestrator |
2025-09-19 07:23:40.249525 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-09-19 07:23:40.249536 | orchestrator | Friday 19 September 2025 07:21:54 +0000 (0:00:01.899) 0:02:21.921 ******
2025-09-19 07:23:40.249547 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.249557 | orchestrator |
2025-09-19 07:23:40.249568 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-09-19 07:23:40.249579 | orchestrator | Friday 19 September 2025 07:21:56 +0000 (0:00:02.096) 0:02:24.017 ******
2025-09-19 07:23:40.249589 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.249599 | orchestrator |
2025-09-19 07:23:40.249609 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 07:23:40.249620 | orchestrator | Friday 19 September 2025 07:22:44 +0000 (0:00:48.119) 0:03:12.137 ******
2025-09-19 07:23:40.249630 | orchestrator |
2025-09-19 07:23:40.249640 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 07:23:40.249651 | orchestrator | Friday 19 September 2025 07:22:44 +0000 (0:00:00.078) 0:03:12.215 ******
2025-09-19 07:23:40.249661 | orchestrator |
2025-09-19 07:23:40.249672 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 07:23:40.249683 | orchestrator | Friday 19 September 2025 07:22:45 +0000 (0:00:00.404) 0:03:12.620 ******
2025-09-19 07:23:40.249693 | orchestrator |
2025-09-19 07:23:40.249709 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 07:23:40.249721 | orchestrator | Friday 19 September 2025 07:22:45 +0000 (0:00:00.070) 0:03:12.691 ******
2025-09-19 07:23:40.249731 | orchestrator |
2025-09-19 07:23:40.249741 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 07:23:40.249752 | orchestrator | Friday 19 September 2025 07:22:45 +0000 (0:00:00.070) 0:03:12.761 ******
2025-09-19 07:23:40.249763 | orchestrator |
2025-09-19 07:23:40.249774 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 07:23:40.249785 | orchestrator | Friday 19 September 2025 07:22:45 +0000 (0:00:00.108) 0:03:12.870 ******
2025-09-19 07:23:40.249795 | orchestrator |
2025-09-19 07:23:40.249806 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-09-19 07:23:40.249817 | orchestrator | Friday 19 September 2025 07:22:45 +0000 (0:00:00.121) 0:03:12.992 ******
2025-09-19 07:23:40.249827 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.249837 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:23:40.249847 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:23:40.249857 | orchestrator |
2025-09-19 07:23:40.249866 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-09-19 07:23:40.249876 | orchestrator | Friday 19 September 2025 07:23:14 +0000 (0:00:29.006) 0:03:41.999 ******
2025-09-19 07:23:40.249886 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:23:40.249896 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:23:40.249907 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:23:40.249917 | orchestrator |
2025-09-19 07:23:40.249927 | orchestrator
| PLAY RECAP ********************************************************************* 2025-09-19 07:23:40.249947 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 07:23:40.249968 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-19 07:23:40.249978 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-19 07:23:40.249989 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 07:23:40.249999 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 07:23:40.250009 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 07:23:40.250063 | orchestrator | 2025-09-19 07:23:40.250076 | orchestrator | 2025-09-19 07:23:40.250109 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:23:40.250119 | orchestrator | Friday 19 September 2025 07:23:38 +0000 (0:00:23.808) 0:04:05.807 ****** 2025-09-19 07:23:40.250129 | orchestrator | =============================================================================== 2025-09-19 07:23:40.250140 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 48.12s 2025-09-19 07:23:40.250151 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.01s 2025-09-19 07:23:40.250162 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 23.81s 2025-09-19 07:23:40.250173 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.09s 2025-09-19 07:23:40.250184 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.78s 2025-09-19 07:23:40.250194 | 
orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.64s
2025-09-19 07:23:40.250205 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.59s
2025-09-19 07:23:40.250216 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.88s
2025-09-19 07:23:40.250227 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.84s
2025-09-19 07:23:40.250238 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.82s
2025-09-19 07:23:40.250248 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.77s
2025-09-19 07:23:40.250259 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.64s
2025-09-19 07:23:40.250271 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.57s
2025-09-19 07:23:40.250281 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.54s
2025-09-19 07:23:40.250293 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.52s
2025-09-19 07:23:40.250303 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.49s
2025-09-19 07:23:40.250314 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.48s
2025-09-19 07:23:40.250324 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.39s
2025-09-19 07:23:40.250331 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.37s
2025-09-19 07:23:40.250338 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.23s
2025-09-19 07:23:40.250345 | orchestrator | 2025-09-19 07:23:40 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:23:40.250358 |
orchestrator | 2025-09-19 07:23:40 | INFO  | Task 624d5c88-e199-4b11-b8eb-710c587ab997 is in state SUCCESS
2025-09-19 07:23:40.250365 | orchestrator |
2025-09-19 07:23:40.250372 | orchestrator |
2025-09-19 07:23:40.250378 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:23:40.250393 | orchestrator |
2025-09-19 07:23:40.250399 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:23:40.250406 | orchestrator | Friday 19 September 2025 07:20:34 +0000 (0:00:00.289) 0:00:00.289 ******
2025-09-19 07:23:40.250412 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:23:40.250419 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:23:40.250426 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:23:40.250432 | orchestrator |
2025-09-19 07:23:40.250439 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:23:40.250445 | orchestrator | Friday 19 September 2025 07:20:34 +0000 (0:00:00.258) 0:00:00.547 ******
2025-09-19 07:23:40.250452 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-09-19 07:23:40.250458 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-09-19 07:23:40.250465 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-09-19 07:23:40.250471 | orchestrator |
2025-09-19 07:23:40.250478 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-09-19 07:23:40.250485 | orchestrator |
2025-09-19 07:23:40.250491 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-19 07:23:40.250497 | orchestrator | Friday 19 September 2025 07:20:35 +0000 (0:00:00.919) 0:00:01.467 ******
2025-09-19 07:23:40.250504 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19
07:23:40.250511 | orchestrator |
2025-09-19 07:23:40.250534 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-09-19 07:23:40.250542 | orchestrator | Friday 19 September 2025 07:20:36 +0000 (0:00:00.993) 0:00:02.461 ******
2025-09-19 07:23:40.250548 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-09-19 07:23:40.250555 | orchestrator |
2025-09-19 07:23:40.250561 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-09-19 07:23:40.250568 | orchestrator | Friday 19 September 2025 07:20:40 +0000 (0:00:03.821) 0:00:06.282 ******
2025-09-19 07:23:40.250574 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-09-19 07:23:40.250581 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-09-19 07:23:40.250588 | orchestrator |
2025-09-19 07:23:40.250594 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-09-19 07:23:40.250601 | orchestrator | Friday 19 September 2025 07:20:47 +0000 (0:00:07.090) 0:00:13.373 ******
2025-09-19 07:23:40.250607 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:23:40.250614 | orchestrator |
2025-09-19 07:23:40.250620 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-09-19 07:23:40.250627 | orchestrator | Friday 19 September 2025 07:20:50 +0000 (0:00:03.509) 0:00:16.883 ******
2025-09-19 07:23:40.250633 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:23:40.250640 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-09-19 07:23:40.250646 | orchestrator |
2025-09-19 07:23:40.250653 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-09-19 07:23:40.250659 |
orchestrator | Friday 19 September 2025 07:20:54 +0000 (0:00:03.987) 0:00:20.870 ****** 2025-09-19 07:23:40.250666 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 07:23:40.250672 | orchestrator | 2025-09-19 07:23:40.250679 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-19 07:23:40.250685 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:03.429) 0:00:24.300 ****** 2025-09-19 07:23:40.250692 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-19 07:23:40.250698 | orchestrator | 2025-09-19 07:23:40.250705 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-19 07:23:40.250711 | orchestrator | Friday 19 September 2025 07:21:02 +0000 (0:00:04.198) 0:00:28.498 ****** 2025-09-19 07:23:40.250720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.250739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.250752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.250760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 
07:23:40.250857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.250887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.250894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.250900 | orchestrator |
2025-09-19 07:23:40.250907 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-09-19 07:23:40.250914 | orchestrator | Friday 19 September 2025 07:21:05 +0000 (0:00:02.907) 0:00:31.405 ******
2025-09-19 07:23:40.250930 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.250941 | orchestrator |
2025-09-19 07:23:40.250952 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-09-19 07:23:40.250962 | orchestrator | Friday 19 September 2025 07:21:05 +0000 (0:00:00.264) 0:00:31.670 ******
2025-09-19 07:23:40.250972 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.250983 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.250993 | orchestrator | skipping: [testbed-node-2]
2025-09-19
07:23:40.251005 | orchestrator | 2025-09-19 07:23:40.251017 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 07:23:40.251028 | orchestrator | Friday 19 September 2025 07:21:05 +0000 (0:00:00.422) 0:00:32.092 ****** 2025-09-19 07:23:40.251038 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:23:40.251045 | orchestrator | 2025-09-19 07:23:40.251051 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-19 07:23:40.251057 | orchestrator | Friday 19 September 2025 07:21:06 +0000 (0:00:00.748) 0:00:32.841 ****** 2025-09-19 07:23:40.251065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.251076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.251112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.251120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.251137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.251144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.251155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.251162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.251173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.251180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 
07:23:40.251193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.251200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.251206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.251216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251260 | orchestrator |
2025-09-19 07:23:40.251267 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-09-19 07:23:40.251274 | orchestrator | Friday 19 September 2025 07:21:13 +0000 (0:00:07.207) 0:00:40.049 ******
2025-09-19 07:23:40.251281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:23:40.251291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:23:40.251298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251334 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.251341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:23:40.251348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:23:40.251358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:23:40.251370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:23:40.251395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251418 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.251425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251455 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.251462 | orchestrator |
2025-09-19 07:23:40.251469 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-09-19 07:23:40.251475 | orchestrator | Friday 19 September 2025 07:21:15 +0000 (0:00:01.191) 0:00:41.240 ******
2025-09-19 07:23:40.251482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:23:40.251489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:23:40.251499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251537 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.251543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:23:40.251550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:23:40.251560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251599 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.251606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:23:40.251613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:23:40.251620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251662 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.251668 | orchestrator |
2025-09-19 07:23:40.251675 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-09-19 07:23:40.251682 | orchestrator | Friday 19 September 2025 07:21:16 +0000 (0:00:01.382) 0:00:42.623 ******
2025-09-19 07:23:40.251688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:23:40.251695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:23:40.251705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 07:23:40.251717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:23:40.251729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:23:40.251736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 07:23:40.251743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.251853 | orchestrator |
2025-09-19 07:23:40.251864 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-09-19 07:23:40.251874 | orchestrator | Friday 19 September 2025 07:21:23 +0000 (0:00:06.638) 0:00:49.262 ******
2025-09-19 07:23:40.251885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.251896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.251908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.252098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252239 | orchestrator | 2025-09-19 07:23:40.252246 | orchestrator | TASK [designate : 
Copying over pools.yaml] ************************************* 2025-09-19 07:23:40.252253 | orchestrator | Friday 19 September 2025 07:21:45 +0000 (0:00:22.241) 0:01:11.503 ****** 2025-09-19 07:23:40.252260 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 07:23:40.252267 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 07:23:40.252273 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 07:23:40.252280 | orchestrator | 2025-09-19 07:23:40.252287 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-19 07:23:40.252293 | orchestrator | Friday 19 September 2025 07:21:51 +0000 (0:00:06.454) 0:01:17.958 ****** 2025-09-19 07:23:40.252300 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 07:23:40.252306 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 07:23:40.252313 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 07:23:40.252319 | orchestrator | 2025-09-19 07:23:40.252326 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-19 07:23:40.252333 | orchestrator | Friday 19 September 2025 07:21:54 +0000 (0:00:02.737) 0:01:20.695 ****** 2025-09-19 07:23:40.252339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:23:40.252351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:23:40.252368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:23:40.252376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252475 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252524 | orchestrator | 2025-09-19 07:23:40.252530 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-19 07:23:40.252537 | 
orchestrator | Friday 19 September 2025 07:21:56 +0000 (0:00:02.488) 0:01:23.184 ****** 2025-09-19 07:23:40.252544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:23:40.252555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:23:40.252562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:23:40.252576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2025-09-19 07:23:40.252642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.252704 | orchestrator | 2025-09-19 07:23:40.252712 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 07:23:40.252719 | orchestrator | Friday 19 September 2025 07:22:00 +0000 (0:00:03.374) 0:01:26.558 ****** 2025-09-19 07:23:40.252727 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.252734 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.252742 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.252749 | orchestrator | 2025-09-19 07:23:40.252757 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-19 07:23:40.252769 | orchestrator | Friday 19 September 2025 07:22:00 +0000 (0:00:00.287) 0:01:26.845 ****** 2025-09-19 07:23:40.252777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:23:40.252786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:23:40.252794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252837 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:23:40.252843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:23:40.252850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:23:40.252857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252897 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:23:40.252904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 07:23:40.252911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 07:23:40.252918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:23:40.252981 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:23:40.252992 | orchestrator | 2025-09-19 07:23:40.253003 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-19 07:23:40.253014 | orchestrator | Friday 19 September 2025 07:22:02 +0000 
(0:00:01.706) 0:01:28.552 ****** 2025-09-19 07:23:40.253025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.253037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.253046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 07:23:40.253053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.253068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.253117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 07:23:40.253125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.253132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.253139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.253146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.253160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 07:23:40.253172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.253179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.253185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.253192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.253199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.253206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.253219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:23:40.253233 | orchestrator |
2025-09-19 07:23:40.253239 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-19 07:23:40.253246 | orchestrator | Friday 19 September 2025 07:22:07 +0000 (0:00:05.163) 0:01:33.716 ******
2025-09-19 07:23:40.253253 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:23:40.253259 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:23:40.253266 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:23:40.253275 | orchestrator |
2025-09-19 07:23:40.253286 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-19 07:23:40.253297 | orchestrator | Friday 19 September 2025 07:22:07 +0000 (0:00:00.447) 0:01:34.164 ******
2025-09-19 07:23:40.253307 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-19 07:23:40.253317 | orchestrator |
2025-09-19 07:23:40.253326 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-19 07:23:40.253336 | orchestrator | Friday 19 September 2025 07:22:10 +0000 (0:00:02.460) 0:01:36.624 ******
2025-09-19 07:23:40.253345 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 07:23:40.253355 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-19 07:23:40.253364 | orchestrator |
2025-09-19 07:23:40.253374 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-19 07:23:40.253384 | orchestrator | Friday 19 September 2025 07:22:12 +0000 (0:00:02.400) 0:01:39.024 ******
2025-09-19 07:23:40.253394 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.253404 | orchestrator |
2025-09-19 07:23:40.253414 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 07:23:40.253424 | orchestrator | Friday 19 September 2025 07:22:30 +0000 (0:00:18.015) 0:01:57.039 ******
2025-09-19 07:23:40.253433 | orchestrator |
2025-09-19 07:23:40.253445 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 07:23:40.253455 | orchestrator | Friday 19 September 2025 07:22:31 +0000 (0:00:00.277) 0:01:57.317 ******
2025-09-19 07:23:40.253466 | orchestrator |
2025-09-19 07:23:40.253476 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 07:23:40.253486 | orchestrator | Friday 19 September 2025 07:22:31 +0000 (0:00:00.078) 0:01:57.395 ******
2025-09-19 07:23:40.253495 | orchestrator |
2025-09-19 07:23:40.253505 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-19 07:23:40.253514 | orchestrator | Friday 19 September 2025 07:22:31 +0000 (0:00:00.070) 0:01:57.466 ******
2025-09-19 07:23:40.253524 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:23:40.253533 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.253542 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:23:40.253551 | orchestrator |
2025-09-19 07:23:40.253561 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-19 07:23:40.253570 | orchestrator | Friday 19 September 2025 07:22:44 +0000 (0:00:12.773) 0:02:10.239 ******
2025-09-19 07:23:40.253580 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.253590 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:23:40.253599 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:23:40.253609 | orchestrator |
2025-09-19 07:23:40.253619 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-19 07:23:40.253629 | orchestrator | Friday 19 September 2025 07:22:51 +0000 (0:00:07.150) 0:02:17.390 ******
2025-09-19 07:23:40.253639 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:23:40.253648 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:23:40.253658 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.253668 | orchestrator |
2025-09-19 07:23:40.253677 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-19 07:23:40.253698 | orchestrator | Friday 19 September 2025 07:23:02 +0000 (0:00:10.927) 0:02:28.317 ******
2025-09-19 07:23:40.253707 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.253718 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:23:40.253727 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:23:40.253737 | orchestrator |
2025-09-19 07:23:40.253747 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-19 07:23:40.253757 | orchestrator | Friday 19 September 2025 07:23:14 +0000 (0:00:12.750) 0:02:41.068 ******
2025-09-19 07:23:40.253768 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:23:40.253777 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:23:40.253787 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.253797 | orchestrator |
2025-09-19 07:23:40.253807 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-19 07:23:40.253816 | orchestrator | Friday 19 September 2025 07:23:25 +0000 (0:00:10.950) 0:02:52.019 ******
2025-09-19 07:23:40.253826 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.253836 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:23:40.253846 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:23:40.253855 | orchestrator |
2025-09-19 07:23:40.253865 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-19 07:23:40.253875 | orchestrator | Friday 19 September 2025 07:23:31 +0000 (0:00:05.841) 0:02:57.860 ******
2025-09-19 07:23:40.253884 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:23:40.253894 | orchestrator |
2025-09-19 07:23:40.253905 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:23:40.253915 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 07:23:40.253926 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 07:23:40.253937 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 07:23:40.253947 | orchestrator |
2025-09-19 07:23:40.253957 | orchestrator |
2025-09-19 07:23:40.253988 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:23:40.254000 | orchestrator | Friday 19 September 2025 07:23:39 +0000 (0:00:07.788) 0:03:05.649 ******
2025-09-19 07:23:40.254010 | orchestrator | ===============================================================================
2025-09-19 07:23:40.254054 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.24s
2025-09-19 07:23:40.254065 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.02s
2025-09-19 07:23:40.254075 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.77s
2025-09-19 07:23:40.254146 | orchestrator | designate : Restart designate-producer container ----------------------- 12.75s
2025-09-19 07:23:40.254156 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.95s
2025-09-19 07:23:40.254166 | orchestrator | designate : Restart designate-central container ------------------------ 10.93s
2025-09-19 07:23:40.254177 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.79s
2025-09-19 07:23:40.254188 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.21s
2025-09-19 07:23:40.254199 | orchestrator | designate : Restart designate-api container ----------------------------- 7.15s
2025-09-19 07:23:40.254210 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.09s
2025-09-19 07:23:40.254220 | orchestrator | designate : Copying over config.json files for services ----------------- 6.64s
2025-09-19 07:23:40.254230 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.45s
2025-09-19 07:23:40.254240 | orchestrator | designate : Restart designate-worker container -------------------------- 5.84s
2025-09-19 07:23:40.254262 | orchestrator | designate : Check designate containers ---------------------------------- 5.16s
2025-09-19 07:23:40.254272 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.20s
2025-09-19 07:23:40.254282 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.99s
2025-09-19 07:23:40.254291 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.82s
2025-09-19 07:23:40.254301 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.51s
2025-09-19 07:23:40.254310 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.43s
2025-09-19 07:23:40.254320 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.37s
2025-09-19 07:23:40.254330 | orchestrator | 2025-09-19 07:23:40 | INFO  | Task 5bccd1b4-1a90-4c77-84ed-fa917e52599a is in state STARTED
2025-09-19 07:23:40.254340 | orchestrator | 2025-09-19 07:23:40 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:23:43.279302 | orchestrator | 2025-09-19 07:23:43 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED
2025-09-19 07:23:43.281156 | orchestrator | 2025-09-19 07:23:43 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED
2025-09-19 07:23:43.281199 | orchestrator | 2025-09-19 07:23:43 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:23:43.281211 | orchestrator | 2025-09-19 07:23:43 | INFO  | Task 5bccd1b4-1a90-4c77-84ed-fa917e52599a is in state STARTED
2025-09-19 07:23:43.281222 | orchestrator | 2025-09-19 07:23:43 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:23:46.315687 | orchestrator | 2025-09-19 07:23:46 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED
2025-09-19 07:23:46.317218 | orchestrator | 2025-09-19 07:23:46 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED
2025-09-19 07:23:46.319158 | orchestrator | 2025-09-19 07:23:46 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:23:46.320345 | orchestrator | 2025-09-19 07:23:46 | INFO  | Task 5bccd1b4-1a90-4c77-84ed-fa917e52599a is in state STARTED
2025-09-19 07:23:46.320441 | orchestrator | 2025-09-19 07:23:46 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:23:49.369544 | orchestrator | 2025-09-19 07:23:49 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED
2025-09-19 07:23:49.369645 | orchestrator | 2025-09-19 07:23:49 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED
2025-09-19 07:23:49.370510 | orchestrator | 2025-09-19 07:23:49 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:23:49.371729 | orchestrator | 2025-09-19 07:23:49 | INFO  | Task 5bccd1b4-1a90-4c77-84ed-fa917e52599a is in state STARTED
2025-09-19 07:23:49.371767 | orchestrator | 2025-09-19 07:23:49 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:24:53.391980 | orchestrator | 2025-09-19 07:24:53 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED
2025-09-19 07:24:53.395688 | orchestrator | 2025-09-19 07:24:53 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED
2025-09-19 07:24:53.397454 | orchestrator | 2025-09-19 07:24:53 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:24:53.400537 | orchestrator | 2025-09-19 07:24:53 | INFO  | Task 5bccd1b4-1a90-4c77-84ed-fa917e52599a is in state SUCCESS
2025-09-19 07:24:53.402832 | orchestrator |
2025-09-19 07:24:53.402894 | orchestrator |
2025-09-19 07:24:53.402913 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:24:53.402932 | orchestrator |
2025-09-19 07:24:53.402947 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:24:53.402962 | orchestrator | Friday 19 September 2025 07:23:42 +0000 (0:00:00.200) 0:00:00.200 ******
2025-09-19 07:24:53.402978 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:24:53.402996 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:24:53.403011 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:24:53.403026 | orchestrator |
2025-09-19 07:24:53.403042 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:24:53.403057 | orchestrator | Friday 19 September 2025 07:23:42 +0000 (0:00:00.421) 0:00:00.621 ******
2025-09-19 07:24:53.403074 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-19 07:24:53.403090 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-19 07:24:53.403106 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-19 07:24:53.403121 | orchestrator |
2025-09-19 07:24:53.403134 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-19 07:24:53.403150 | orchestrator |
2025-09-19 07:24:53.403167 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 07:24:53.403202 | orchestrator | Friday 19 September 2025 07:23:43 +0000 (0:00:00.746) 0:00:01.367 ******
2025-09-19 07:24:53.403220 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:24:53.403238 | orchestrator |
2025-09-19 07:24:53.403254 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-19 07:24:53.403270 | orchestrator | Friday 19 September 2025 07:23:43 +0000 (0:00:00.571) 0:00:01.938 ******
2025-09-19 07:24:53.403284 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-19 07:24:53.403300 | orchestrator |
2025-09-19 07:24:53.403316 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-19 07:24:53.403328 | orchestrator | Friday 19 September 2025 07:23:47 +0000 (0:00:03.925) 0:00:05.864 ******
2025-09-19 07:24:53.403373 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-19 07:24:53.403393 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-19 07:24:53.403409 | orchestrator |
2025-09-19 07:24:53.403425 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-19 07:24:53.403446 | orchestrator | Friday 19 September 2025 07:23:55 +0000 (0:00:07.642) 0:00:13.506 ******
2025-09-19 07:24:53.403466 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:24:53.403484 | orchestrator |
2025-09-19 07:24:53.403501 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-19 07:24:53.403515 | orchestrator | Friday 19 September 2025 07:23:59 +0000 (0:00:03.610) 0:00:17.117 ******
2025-09-19 07:24:53.403534 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:24:53.403553 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-19 07:24:53.403571 | orchestrator |
2025-09-19 07:24:53.403588 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-19 07:24:53.403607 | orchestrator | Friday 19 September 2025 07:24:03 +0000 (0:00:04.276) 0:00:21.393 ******
2025-09-19 07:24:53.403624 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:24:53.403643 | orchestrator |
2025-09-19 07:24:53.403659 | orchestrator | TASK [service-ks-register : placement | Granting user roles]
*******************
2025-09-19 07:24:53.403676 | orchestrator | Friday 19 September 2025 07:24:06 +0000 (0:00:03.371) 0:00:24.764 ******
2025-09-19 07:24:53.403694 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-09-19 07:24:53.403712 | orchestrator |
2025-09-19 07:24:53.403728 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 07:24:53.403745 | orchestrator | Friday 19 September 2025 07:24:11 +0000 (0:00:04.415) 0:00:29.180 ******
2025-09-19 07:24:53.403761 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:53.403776 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:53.403790 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:53.403804 | orchestrator |
2025-09-19 07:24:53.403817 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-09-19 07:24:53.403831 | orchestrator | Friday 19 September 2025 07:24:11 +0000 (0:00:00.257) 0:00:29.437 ******
2025-09-19 07:24:53.403853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:24:53.403894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:24:53.403927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:24:53.403943 | orchestrator |
2025-09-19 07:24:53.403958 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-09-19 07:24:53.403974 | orchestrator | Friday 19 September 2025 07:24:12 +0000 (0:00:01.048) 0:00:30.486 ******
2025-09-19 07:24:53.403991 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:53.404007 | orchestrator |
2025-09-19 07:24:53.404022 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-09-19 07:24:53.404037 | orchestrator | Friday 19 September 2025 07:24:12 +0000 (0:00:00.114) 0:00:30.600 ******
2025-09-19 07:24:53.404053 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:24:53.404068 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:24:53.404084 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:24:53.404099 | orchestrator |
2025-09-19 07:24:53.404112 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 07:24:53.404128 | orchestrator | Friday 19 September 2025 07:24:13 +0000 (0:00:00.437) 0:00:31.038 ******
2025-09-19 07:24:53.404145 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:24:53.404161 | orchestrator |
2025-09-19 07:24:53.404176 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-09-19 07:24:53.404245 | orchestrator | Friday 19 September 2025 07:24:13 +0000 (0:00:00.425) 0:00:31.463 ******
2025-09-19 07:24:53.404263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:24:53.404295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 07:24:53.404323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False,
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.404337 | orchestrator | 2025-09-19 07:24:53.404351 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-19 07:24:53.404364 | orchestrator | Friday 19 September 2025 07:24:14 +0000 (0:00:01.530) 0:00:32.993 ****** 2025-09-19 07:24:53.404378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:24:53.404390 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:53.404404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:24:53.404419 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:53.404441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:24:53.404459 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:53.404472 | orchestrator | 2025-09-19 07:24:53.404485 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-19 07:24:53.404500 | orchestrator | Friday 19 September 2025 07:24:15 +0000 (0:00:00.693) 0:00:33.687 ****** 2025-09-19 07:24:53.404514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:24:53.404527 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:53.404541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:24:53.404556 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:53.404570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:24:53.404584 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:53.404597 | orchestrator | 2025-09-19 07:24:53.404611 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-19 07:24:53.404624 | orchestrator | Friday 19 September 2025 07:24:16 +0000 (0:00:00.665) 0:00:34.352 ****** 2025-09-19 07:24:53.404644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.404666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.404682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.404696 | 
orchestrator | 2025-09-19 07:24:53.404709 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-19 07:24:53.404720 | orchestrator | Friday 19 September 2025 07:24:17 +0000 (0:00:01.521) 0:00:35.873 ****** 2025-09-19 07:24:53.404732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.404746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.404777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.404792 | orchestrator | 2025-09-19 07:24:53.404805 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-19 07:24:53.404819 | orchestrator | Friday 19 September 2025 07:24:20 +0000 (0:00:02.490) 0:00:38.364 ****** 2025-09-19 07:24:53.404832 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-19 07:24:53.404846 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-19 07:24:53.404859 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-19 07:24:53.404871 | orchestrator | 2025-09-19 07:24:53.404884 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-19 07:24:53.404897 | orchestrator | Friday 19 
September 2025 07:24:22 +0000 (0:00:01.730) 0:00:40.095 ****** 2025-09-19 07:24:53.404911 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:53.404924 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:24:53.404936 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:24:53.404950 | orchestrator | 2025-09-19 07:24:53.404964 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-19 07:24:53.404977 | orchestrator | Friday 19 September 2025 07:24:23 +0000 (0:00:01.384) 0:00:41.479 ****** 2025-09-19 07:24:53.404989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:24:53.405000 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:24:53.405013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:24:53.405036 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:24:53.405058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 07:24:53.405072 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:24:53.405085 | orchestrator | 2025-09-19 07:24:53.405099 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-19 07:24:53.405112 | orchestrator | Friday 19 September 2025 07:24:23 +0000 (0:00:00.525) 0:00:42.004 ****** 2025-09-19 07:24:53.405126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.405140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.405154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 07:24:53.405176 | orchestrator | 2025-09-19 07:24:53.405247 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-19 07:24:53.405262 | orchestrator | Friday 19 September 2025 07:24:25 +0000 (0:00:01.229) 0:00:43.234 ****** 2025-09-19 07:24:53.405275 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:53.405285 | orchestrator | 2025-09-19 07:24:53.405297 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-19 07:24:53.405310 | orchestrator | Friday 19 September 2025 07:24:27 +0000 (0:00:02.669) 0:00:45.903 ****** 2025-09-19 07:24:53.405324 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:53.405339 | orchestrator | 2025-09-19 07:24:53.405353 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-19 07:24:53.405365 | orchestrator | Friday 19 September 2025 07:24:30 +0000 (0:00:02.377) 0:00:48.281 ****** 2025-09-19 07:24:53.405377 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:53.405391 | orchestrator | 2025-09-19 07:24:53.405405 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 07:24:53.405419 | orchestrator | Friday 19 September 2025 07:24:44 +0000 
(0:00:14.385) 0:01:02.666 ****** 2025-09-19 07:24:53.405432 | orchestrator | 2025-09-19 07:24:53.405444 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 07:24:53.405458 | orchestrator | Friday 19 September 2025 07:24:44 +0000 (0:00:00.063) 0:01:02.729 ****** 2025-09-19 07:24:53.405472 | orchestrator | 2025-09-19 07:24:53.405494 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 07:24:53.405507 | orchestrator | Friday 19 September 2025 07:24:44 +0000 (0:00:00.078) 0:01:02.808 ****** 2025-09-19 07:24:53.405521 | orchestrator | 2025-09-19 07:24:53.405536 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-19 07:24:53.405550 | orchestrator | Friday 19 September 2025 07:24:44 +0000 (0:00:00.069) 0:01:02.877 ****** 2025-09-19 07:24:53.405560 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:24:53.405574 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:24:53.405587 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:24:53.405601 | orchestrator | 2025-09-19 07:24:53.405614 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:24:53.405630 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 07:24:53.405645 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:24:53.405658 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:24:53.405671 | orchestrator | 2025-09-19 07:24:53.405685 | orchestrator | 2025-09-19 07:24:53.405698 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:24:53.405712 | orchestrator | Friday 19 September 2025 07:24:52 +0000 (0:00:07.714) 0:01:10.591 ****** 
2025-09-19 07:24:53.405724 | orchestrator | =============================================================================== 2025-09-19 07:24:53.405737 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.39s 2025-09-19 07:24:53.405751 | orchestrator | placement : Restart placement-api container ----------------------------- 7.71s 2025-09-19 07:24:53.405776 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.64s 2025-09-19 07:24:53.405789 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.42s 2025-09-19 07:24:53.405802 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.28s 2025-09-19 07:24:53.405816 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.93s 2025-09-19 07:24:53.405828 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.61s 2025-09-19 07:24:53.405839 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.37s 2025-09-19 07:24:53.405852 | orchestrator | placement : Creating placement databases -------------------------------- 2.67s 2025-09-19 07:24:53.405864 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.49s 2025-09-19 07:24:53.405878 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.38s 2025-09-19 07:24:53.405893 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.73s 2025-09-19 07:24:53.405907 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.53s 2025-09-19 07:24:53.405919 | orchestrator | placement : Copying over config.json files for services ----------------- 1.52s 2025-09-19 07:24:53.405931 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.38s 2025-09-19 
07:24:53.405945 | orchestrator | placement : Check placement containers ---------------------------------- 1.23s 2025-09-19 07:24:53.405958 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.05s 2025-09-19 07:24:53.405970 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2025-09-19 07:24:53.405984 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.69s 2025-09-19 07:24:53.405996 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.67s 2025-09-19 07:24:53.406009 | orchestrator | 2025-09-19 07:24:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:56.455555 | orchestrator | 2025-09-19 07:24:56 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:24:56.456878 | orchestrator | 2025-09-19 07:24:56 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:24:56.458917 | orchestrator | 2025-09-19 07:24:56 | INFO  | Task 7fa04dae-cc88-4078-8a40-8289cf3e4267 is in state STARTED 2025-09-19 07:24:56.460125 | orchestrator | 2025-09-19 07:24:56 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:24:56.460479 | orchestrator | 2025-09-19 07:24:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:24:59.508740 | orchestrator | 2025-09-19 07:24:59 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:24:59.510373 | orchestrator | 2025-09-19 07:24:59 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:24:59.511908 | orchestrator | 2025-09-19 07:24:59 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:24:59.513215 | orchestrator | 2025-09-19 07:24:59 | INFO  | Task 7fa04dae-cc88-4078-8a40-8289cf3e4267 is in state SUCCESS 2025-09-19 07:24:59.514673 | orchestrator | 2025-09-19 07:24:59 | INFO 
 | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:24:59.514860 | orchestrator | 2025-09-19 07:24:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:02.559372 | orchestrator | 2025-09-19 07:25:02 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:02.560864 | orchestrator | 2025-09-19 07:25:02 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:02.562910 | orchestrator | 2025-09-19 07:25:02 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:02.565080 | orchestrator | 2025-09-19 07:25:02 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:02.565115 | orchestrator | 2025-09-19 07:25:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:05.614297 | orchestrator | 2025-09-19 07:25:05 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:05.615298 | orchestrator | 2025-09-19 07:25:05 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:05.616846 | orchestrator | 2025-09-19 07:25:05 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:05.618461 | orchestrator | 2025-09-19 07:25:05 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:05.618494 | orchestrator | 2025-09-19 07:25:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:08.662112 | orchestrator | 2025-09-19 07:25:08 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:08.665087 | orchestrator | 2025-09-19 07:25:08 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:08.666909 | orchestrator | 2025-09-19 07:25:08 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:08.668780 | orchestrator | 2025-09-19 07:25:08 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:08.668809 | orchestrator | 2025-09-19 07:25:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:11.706602 | orchestrator | 2025-09-19 07:25:11 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:11.708047 | orchestrator | 2025-09-19 07:25:11 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:11.709610 | orchestrator | 2025-09-19 07:25:11 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:11.711282 | orchestrator | 2025-09-19 07:25:11 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:11.711308 | orchestrator | 2025-09-19 07:25:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:14.753969 | orchestrator | 2025-09-19 07:25:14 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:14.754650 | orchestrator | 2025-09-19 07:25:14 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:14.755413 | orchestrator | 2025-09-19 07:25:14 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:14.756765 | orchestrator | 2025-09-19 07:25:14 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:14.756792 | orchestrator | 2025-09-19 07:25:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:17.807938 | orchestrator | 2025-09-19 07:25:17 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:17.809746 | orchestrator | 2025-09-19 07:25:17 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:17.810969 | orchestrator | 2025-09-19 07:25:17 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:17.812492 | orchestrator | 2025-09-19 07:25:17 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:17.812526 | orchestrator | 2025-09-19 07:25:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:20.852608 | orchestrator | 2025-09-19 07:25:20 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:20.854273 | orchestrator | 2025-09-19 07:25:20 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:20.855798 | orchestrator | 2025-09-19 07:25:20 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:20.857300 | orchestrator | 2025-09-19 07:25:20 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:20.857344 | orchestrator | 2025-09-19 07:25:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:23.905830 | orchestrator | 2025-09-19 07:25:23 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:23.907204 | orchestrator | 2025-09-19 07:25:23 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:23.908945 | orchestrator | 2025-09-19 07:25:23 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:23.912325 | orchestrator | 2025-09-19 07:25:23 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:23.913184 | orchestrator | 2025-09-19 07:25:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:26.956320 | orchestrator | 2025-09-19 07:25:26 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:26.957313 | orchestrator | 2025-09-19 07:25:26 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:26.957881 | orchestrator | 2025-09-19 07:25:26 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:26.958942 | orchestrator | 2025-09-19 07:25:26 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:26.959125 | orchestrator | 2025-09-19 07:25:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:30.007913 | orchestrator | 2025-09-19 07:25:30 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:30.009442 | orchestrator | 2025-09-19 07:25:30 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:30.010944 | orchestrator | 2025-09-19 07:25:30 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:30.012967 | orchestrator | 2025-09-19 07:25:30 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:30.013355 | orchestrator | 2025-09-19 07:25:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:33.055384 | orchestrator | 2025-09-19 07:25:33 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:33.058284 | orchestrator | 2025-09-19 07:25:33 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:33.061963 | orchestrator | 2025-09-19 07:25:33 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:33.065500 | orchestrator | 2025-09-19 07:25:33 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:33.065537 | orchestrator | 2025-09-19 07:25:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:36.103805 | orchestrator | 2025-09-19 07:25:36 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:36.104222 | orchestrator | 2025-09-19 07:25:36 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:36.105853 | orchestrator | 2025-09-19 07:25:36 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:36.107406 | orchestrator | 2025-09-19 07:25:36 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:36.107479 | orchestrator | 2025-09-19 07:25:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:39.139720 | orchestrator | 2025-09-19 07:25:39 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:39.139956 | orchestrator | 2025-09-19 07:25:39 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:39.140871 | orchestrator | 2025-09-19 07:25:39 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:39.142059 | orchestrator | 2025-09-19 07:25:39 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:39.142144 | orchestrator | 2025-09-19 07:25:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:42.165519 | orchestrator | 2025-09-19 07:25:42 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:42.167031 | orchestrator | 2025-09-19 07:25:42 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:42.168485 | orchestrator | 2025-09-19 07:25:42 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:42.169423 | orchestrator | 2025-09-19 07:25:42 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:42.169458 | orchestrator | 2025-09-19 07:25:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:45.210479 | orchestrator | 2025-09-19 07:25:45 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:45.217549 | orchestrator | 2025-09-19 07:25:45 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:45.233495 | orchestrator | 2025-09-19 07:25:45 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:45.235814 | orchestrator | 2025-09-19 07:25:45 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:45.235851 | orchestrator | 2025-09-19 07:25:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:48.274768 | orchestrator | 2025-09-19 07:25:48 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:48.275598 | orchestrator | 2025-09-19 07:25:48 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:48.276944 | orchestrator | 2025-09-19 07:25:48 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:48.278184 | orchestrator | 2025-09-19 07:25:48 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:48.278289 | orchestrator | 2025-09-19 07:25:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:51.311304 | orchestrator | 2025-09-19 07:25:51 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:51.312154 | orchestrator | 2025-09-19 07:25:51 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:51.313422 | orchestrator | 2025-09-19 07:25:51 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:51.315047 | orchestrator | 2025-09-19 07:25:51 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:51.315195 | orchestrator | 2025-09-19 07:25:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:54.357968 | orchestrator | 2025-09-19 07:25:54 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:54.359386 | orchestrator | 2025-09-19 07:25:54 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:54.360646 | orchestrator | 2025-09-19 07:25:54 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state STARTED 2025-09-19 07:25:54.363739 | orchestrator | 2025-09-19 07:25:54 | INFO  | Task 
62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:25:54.363767 | orchestrator | 2025-09-19 07:25:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:25:57.402968 | orchestrator | 2025-09-19 07:25:57 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:25:57.404500 | orchestrator | 2025-09-19 07:25:57 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:25:57.406809 | orchestrator | 2025-09-19 07:25:57 | INFO  | Task 9e370f2d-a190-40dc-8f80-4637486e177e is in state SUCCESS 2025-09-19 07:25:57.408737 | orchestrator | 2025-09-19 07:25:57.408778 | orchestrator | 2025-09-19 07:25:57.408790 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:25:57.408802 | orchestrator | 2025-09-19 07:25:57.408812 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:25:57.408824 | orchestrator | Friday 19 September 2025 07:24:56 +0000 (0:00:00.134) 0:00:00.134 ****** 2025-09-19 07:25:57.408835 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:25:57.408847 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:25:57.408857 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:25:57.408868 | orchestrator | 2025-09-19 07:25:57.408879 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:25:57.408890 | orchestrator | Friday 19 September 2025 07:24:57 +0000 (0:00:00.228) 0:00:00.363 ****** 2025-09-19 07:25:57.408900 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-19 07:25:57.408911 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-19 07:25:57.408922 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-19 07:25:57.408932 | orchestrator | 2025-09-19 07:25:57.408943 | orchestrator | PLAY [Wait for the Nova service] 
*********************************************** 2025-09-19 07:25:57.408954 | orchestrator | 2025-09-19 07:25:57.408964 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-19 07:25:57.408975 | orchestrator | Friday 19 September 2025 07:24:57 +0000 (0:00:00.483) 0:00:00.846 ****** 2025-09-19 07:25:57.408985 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:25:57.408996 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:25:57.409007 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:25:57.409018 | orchestrator | 2025-09-19 07:25:57.409029 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:25:57.409041 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:25:57.409053 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:25:57.409064 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:25:57.409075 | orchestrator | 2025-09-19 07:25:57.409154 | orchestrator | 2025-09-19 07:25:57.409167 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:25:57.409177 | orchestrator | Friday 19 September 2025 07:24:58 +0000 (0:00:00.669) 0:00:01.515 ****** 2025-09-19 07:25:57.409188 | orchestrator | =============================================================================== 2025-09-19 07:25:57.409199 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.67s 2025-09-19 07:25:57.409209 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2025-09-19 07:25:57.409220 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s 2025-09-19 07:25:57.409231 | orchestrator | 2025-09-19 07:25:57.409243 | orchestrator 
| 2025-09-19 07:25:57.409289 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:25:57.409311 | orchestrator | 2025-09-19 07:25:57.409330 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:25:57.409380 | orchestrator | Friday 19 September 2025 07:23:44 +0000 (0:00:00.356) 0:00:00.357 ****** 2025-09-19 07:25:57.409400 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:25:57.409418 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:25:57.409438 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:25:57.409456 | orchestrator | 2025-09-19 07:25:57.409472 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:25:57.409483 | orchestrator | Friday 19 September 2025 07:23:44 +0000 (0:00:00.353) 0:00:00.710 ****** 2025-09-19 07:25:57.409494 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-19 07:25:57.409505 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-19 07:25:57.409515 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-19 07:25:57.409526 | orchestrator | 2025-09-19 07:25:57.409537 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-19 07:25:57.409548 | orchestrator | 2025-09-19 07:25:57.409558 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 07:25:57.409569 | orchestrator | Friday 19 September 2025 07:23:44 +0000 (0:00:00.377) 0:00:01.088 ****** 2025-09-19 07:25:57.409580 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:25:57.409590 | orchestrator | 2025-09-19 07:25:57.409601 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-19 07:25:57.409612 | orchestrator | Friday 19 
September 2025 07:23:45 +0000 (0:00:00.502) 0:00:01.591 ****** 2025-09-19 07:25:57.409623 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-19 07:25:57.409634 | orchestrator | 2025-09-19 07:25:57.409644 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-19 07:25:57.409655 | orchestrator | Friday 19 September 2025 07:23:49 +0000 (0:00:04.009) 0:00:05.600 ****** 2025-09-19 07:25:57.409666 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-19 07:25:57.409677 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-19 07:25:57.409687 | orchestrator | 2025-09-19 07:25:57.409698 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-19 07:25:57.409709 | orchestrator | Friday 19 September 2025 07:23:57 +0000 (0:00:07.855) 0:00:13.455 ****** 2025-09-19 07:25:57.409719 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 07:25:57.409730 | orchestrator | 2025-09-19 07:25:57.409741 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-19 07:25:57.409751 | orchestrator | Friday 19 September 2025 07:24:00 +0000 (0:00:03.482) 0:00:16.938 ****** 2025-09-19 07:25:57.409779 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 07:25:57.409791 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-19 07:25:57.409802 | orchestrator | 2025-09-19 07:25:57.409813 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-19 07:25:57.409828 | orchestrator | Friday 19 September 2025 07:24:04 +0000 (0:00:04.010) 0:00:20.949 ****** 2025-09-19 07:25:57.409847 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 07:25:57.409865 | orchestrator 
| 2025-09-19 07:25:57.409882 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-19 07:25:57.409900 | orchestrator | Friday 19 September 2025 07:24:08 +0000 (0:00:03.461) 0:00:24.410 ****** 2025-09-19 07:25:57.409918 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-19 07:25:57.409935 | orchestrator | 2025-09-19 07:25:57.409952 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-19 07:25:57.409970 | orchestrator | Friday 19 September 2025 07:24:12 +0000 (0:00:04.693) 0:00:29.103 ****** 2025-09-19 07:25:57.409987 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:25:57.410005 | orchestrator | 2025-09-19 07:25:57.410090 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-19 07:25:57.410102 | orchestrator | Friday 19 September 2025 07:24:16 +0000 (0:00:03.575) 0:00:32.679 ****** 2025-09-19 07:25:57.410113 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:25:57.410124 | orchestrator | 2025-09-19 07:25:57.410189 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-19 07:25:57.410202 | orchestrator | Friday 19 September 2025 07:24:20 +0000 (0:00:04.243) 0:00:36.922 ****** 2025-09-19 07:25:57.410213 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:25:57.410223 | orchestrator | 2025-09-19 07:25:57.410234 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-19 07:25:57.410244 | orchestrator | Friday 19 September 2025 07:24:24 +0000 (0:00:03.872) 0:00:40.795 ****** 2025-09-19 07:25:57.410322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.410346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.410359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.410384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.410408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.410420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.410432 | orchestrator | 2025-09-19 07:25:57.410443 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-19 07:25:57.410454 | orchestrator | Friday 19 September 2025 07:24:26 +0000 (0:00:01.392) 0:00:42.187 ****** 2025-09-19 07:25:57.410467 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:25:57.410487 | orchestrator | 2025-09-19 07:25:57.410506 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-19 07:25:57.410517 | orchestrator | Friday 19 September 2025 07:24:26 +0000 (0:00:00.142) 0:00:42.330 ****** 2025-09-19 07:25:57.410528 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:25:57.410539 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:25:57.410549 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:25:57.410560 | orchestrator | 2025-09-19 07:25:57.410570 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-19 07:25:57.410581 | orchestrator | Friday 19 September 2025 07:24:26 +0000 (0:00:00.491) 0:00:42.821 ****** 2025-09-19 07:25:57.410592 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:25:57.410602 | orchestrator | 2025-09-19 07:25:57.410613 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-19 07:25:57.410624 | orchestrator | Friday 19 September 2025 07:24:27 +0000 (0:00:00.852) 0:00:43.673 ****** 2025-09-19 07:25:57.410635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.410664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.410677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.410688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.410700 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.410711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.410729 | orchestrator | 2025-09-19 07:25:57.410740 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-19 07:25:57.410751 | orchestrator | Friday 19 September 2025 07:24:30 +0000 (0:00:02.487) 0:00:46.161 ****** 2025-09-19 07:25:57.410765 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:25:57.410785 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:25:57.410802 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:25:57.410814 | orchestrator | 2025-09-19 07:25:57.410825 | 
orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 07:25:57.410842 | orchestrator | Friday 19 September 2025 07:24:30 +0000 (0:00:00.318) 0:00:46.480 ****** 2025-09-19 07:25:57.410853 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:25:57.410864 | orchestrator | 2025-09-19 07:25:57.410875 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-19 07:25:57.410886 | orchestrator | Friday 19 September 2025 07:24:31 +0000 (0:00:00.729) 0:00:47.210 ****** 2025-09-19 07:25:57.410897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.410909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.410920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.410932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.410958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.410970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.410981 | orchestrator | 2025-09-19 07:25:57.410993 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-19 07:25:57.411004 | orchestrator | Friday 19 
September 2025 07:24:33 +0000 (0:00:02.344) 0:00:49.554 ****** 2025-09-19 07:25:57.411015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:25:57.411026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:25:57.411038 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:25:57.411057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:25:57.411076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:25:57.411112 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:25:57.411125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:25:57.411136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:25:57.411148 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:25:57.411159 | orchestrator | 2025-09-19 07:25:57.411170 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-19 07:25:57.411181 | orchestrator | Friday 19 September 2025 07:24:34 +0000 (0:00:00.651) 0:00:50.205 ****** 2025-09-19 07:25:57.411192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:25:57.411210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:25:57.411222 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:25:57.411240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:25:57.411253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:25:57.411420 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:25:57.411438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:25:57.411462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:25:57.411473 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:25:57.411484 | orchestrator | 2025-09-19 07:25:57.411495 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-19 07:25:57.411506 | orchestrator | Friday 19 September 2025 07:24:35 +0000 (0:00:01.076) 0:00:51.282 ****** 2025-09-19 07:25:57.411531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.411543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.411555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.411566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.411584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.411603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.411614 | orchestrator | 2025-09-19 07:25:57.411625 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-19 07:25:57.411636 | orchestrator | Friday 19 September 2025 07:24:37 +0000 (0:00:02.802) 0:00:54.084 ****** 2025-09-19 07:25:57.411647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.411659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.411677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.411688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.411709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.411721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.411732 | orchestrator | 2025-09-19 07:25:57.411743 | orchestrator | TASK 
[magnum : Copying over existing policy file] ****************************** 2025-09-19 07:25:57.411754 | orchestrator | Friday 19 September 2025 07:24:43 +0000 (0:00:05.090) 0:00:59.174 ****** 2025-09-19 07:25:57.411766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:25:57.411784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:25:57.411795 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:25:57.411806 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:25:57.411825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:25:57.411836 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:25:57.411847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 07:25:57.411859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:25:57.411877 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:25:57.411888 | orchestrator | 2025-09-19 07:25:57.411899 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-19 07:25:57.411910 | orchestrator | Friday 19 September 2025 07:24:43 +0000 (0:00:00.627) 0:00:59.802 ****** 2025-09-19 07:25:57.411921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.411939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.411950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 07:25:57.411962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.411979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.411991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:25:57.412002 | orchestrator | 2025-09-19 07:25:57.412013 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 07:25:57.412023 | orchestrator | Friday 19 September 2025 07:24:46 +0000 (0:00:02.598) 0:01:02.401 ****** 2025-09-19 07:25:57.412034 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:25:57.412045 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:25:57.412056 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:25:57.412066 | orchestrator | 2025-09-19 07:25:57.412077 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-19 07:25:57.412088 | orchestrator | Friday 19 September 2025 07:24:46 +0000 (0:00:00.426) 0:01:02.827 ****** 2025-09-19 07:25:57.412099 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:25:57.412110 | orchestrator | 2025-09-19 07:25:57.412120 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-19 07:25:57.412131 | orchestrator | Friday 19 
September 2025 07:24:49 +0000 (0:00:02.333) 0:01:05.160 ****** 2025-09-19 07:25:57.412142 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:25:57.412153 | orchestrator | 2025-09-19 07:25:57.412163 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-19 07:25:57.412174 | orchestrator | Friday 19 September 2025 07:24:51 +0000 (0:00:02.404) 0:01:07.565 ****** 2025-09-19 07:25:57.412191 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:25:57.412202 | orchestrator | 2025-09-19 07:25:57.412213 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-19 07:25:57.412224 | orchestrator | Friday 19 September 2025 07:25:24 +0000 (0:00:33.172) 0:01:40.737 ****** 2025-09-19 07:25:57.412235 | orchestrator | 2025-09-19 07:25:57.412246 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-19 07:25:57.412293 | orchestrator | Friday 19 September 2025 07:25:24 +0000 (0:00:00.075) 0:01:40.812 ****** 2025-09-19 07:25:57.412306 | orchestrator | 2025-09-19 07:25:57.412317 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-19 07:25:57.412328 | orchestrator | Friday 19 September 2025 07:25:24 +0000 (0:00:00.068) 0:01:40.881 ****** 2025-09-19 07:25:57.412338 | orchestrator | 2025-09-19 07:25:57.412361 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-19 07:25:57.412372 | orchestrator | Friday 19 September 2025 07:25:24 +0000 (0:00:00.074) 0:01:40.956 ****** 2025-09-19 07:25:57.412383 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:25:57.412394 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:25:57.412404 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:25:57.412415 | orchestrator | 2025-09-19 07:25:57.412426 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] 
****************** 2025-09-19 07:25:57.412437 | orchestrator | Friday 19 September 2025 07:25:39 +0000 (0:00:14.639) 0:01:55.595 ****** 2025-09-19 07:25:57.412448 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:25:57.412459 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:25:57.412469 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:25:57.412480 | orchestrator | 2025-09-19 07:25:57.412490 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:25:57.412502 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 07:25:57.412514 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:25:57.412525 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:25:57.412536 | orchestrator | 2025-09-19 07:25:57.412546 | orchestrator | 2025-09-19 07:25:57.412557 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:25:57.412568 | orchestrator | Friday 19 September 2025 07:25:53 +0000 (0:00:14.515) 0:02:10.111 ****** 2025-09-19 07:25:57.412579 | orchestrator | =============================================================================== 2025-09-19 07:25:57.412589 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 33.17s 2025-09-19 07:25:57.412600 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.64s 2025-09-19 07:25:57.412611 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.52s 2025-09-19 07:25:57.412622 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.86s 2025-09-19 07:25:57.412633 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.09s 2025-09-19 
07:25:57.412644 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.69s 2025-09-19 07:25:57.412655 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.24s 2025-09-19 07:25:57.412666 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.01s 2025-09-19 07:25:57.412676 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.01s 2025-09-19 07:25:57.412687 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.87s 2025-09-19 07:25:57.412698 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.58s 2025-09-19 07:25:57.412709 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.48s 2025-09-19 07:25:57.412719 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.46s 2025-09-19 07:25:57.412730 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.80s 2025-09-19 07:25:57.412741 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.60s 2025-09-19 07:25:57.412752 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.49s 2025-09-19 07:25:57.412762 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.40s 2025-09-19 07:25:57.412773 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.34s 2025-09-19 07:25:57.412784 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.33s 2025-09-19 07:25:57.412795 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.39s 2025-09-19 07:25:57.412817 | orchestrator | 2025-09-19 07:25:57 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 
07:25:57.412828 | orchestrator | 2025-09-19 07:25:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:00.453884 | orchestrator | 2025-09-19 07:26:00 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:00.455173 | orchestrator | 2025-09-19 07:26:00 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:26:00.456608 | orchestrator | 2025-09-19 07:26:00 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:26:00.456642 | orchestrator | 2025-09-19 07:26:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:03.505787 | orchestrator | 2025-09-19 07:26:03 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:03.507852 | orchestrator | 2025-09-19 07:26:03 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:26:03.510091 | orchestrator | 2025-09-19 07:26:03 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:26:03.510205 | orchestrator | 2025-09-19 07:26:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:06.564302 | orchestrator | 2025-09-19 07:26:06 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:06.565509 | orchestrator | 2025-09-19 07:26:06 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:26:06.567439 | orchestrator | 2025-09-19 07:26:06 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:26:06.567729 | orchestrator | 2025-09-19 07:26:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:09.621020 | orchestrator | 2025-09-19 07:26:09 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:09.622746 | orchestrator | 2025-09-19 07:26:09 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:26:09.624159 | orchestrator | 2025-09-19 07:26:09 | 
INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:26:09.624189 | orchestrator | 2025-09-19 07:26:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:12.669743 | orchestrator | 2025-09-19 07:26:12 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:12.672456 | orchestrator | 2025-09-19 07:26:12 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state STARTED 2025-09-19 07:26:12.674590 | orchestrator | 2025-09-19 07:26:12 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED 2025-09-19 07:26:12.674775 | orchestrator | 2025-09-19 07:26:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:15.721845 | orchestrator | 2025-09-19 07:26:15 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:15.724627 | orchestrator | 2025-09-19 07:26:15 | INFO  | Task b4db20cf-1a6a-4f16-8b2e-45e883fe0356 is in state SUCCESS 2025-09-19 07:26:15.726749 | orchestrator | 2025-09-19 07:26:15.726791 | orchestrator | 2025-09-19 07:26:15.726803 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 07:26:15.726815 | orchestrator | 2025-09-19 07:26:15.726826 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 07:26:15.726838 | orchestrator | Friday 19 September 2025 07:23:45 +0000 (0:00:00.325) 0:00:00.325 ****** 2025-09-19 07:26:15.726849 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:26:15.726861 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:26:15.726872 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:26:15.726909 | orchestrator | 2025-09-19 07:26:15.726921 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 07:26:15.726932 | orchestrator | Friday 19 September 2025 07:23:46 +0000 (0:00:00.353) 0:00:00.678 ****** 2025-09-19 07:26:15.726943 | orchestrator | 
ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-19 07:26:15.726954 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-19 07:26:15.726965 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-19 07:26:15.726975 | orchestrator | 2025-09-19 07:26:15.726986 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-19 07:26:15.726997 | orchestrator | 2025-09-19 07:26:15.727043 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-19 07:26:15.727055 | orchestrator | Friday 19 September 2025 07:23:46 +0000 (0:00:00.433) 0:00:01.111 ****** 2025-09-19 07:26:15.727066 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:26:15.727078 | orchestrator | 2025-09-19 07:26:15.727088 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-19 07:26:15.727099 | orchestrator | Friday 19 September 2025 07:23:47 +0000 (0:00:00.547) 0:00:01.658 ****** 2025-09-19 07:26:15.727114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727153 | orchestrator | 2025-09-19 07:26:15.727163 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-19 07:26:15.727200 | orchestrator | Friday 19 September 2025 07:23:47 +0000 (0:00:00.818) 0:00:02.477 ****** 2025-09-19 07:26:15.727214 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-19 07:26:15.727226 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-19 07:26:15.727236 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:26:15.727247 | orchestrator | 2025-09-19 07:26:15.727258 | orchestrator | TASK [grafana : include_tasks] ************************************************* 
2025-09-19 07:26:15.727269 | orchestrator | Friday 19 September 2025 07:23:48 +0000 (0:00:00.834) 0:00:03.311 ****** 2025-09-19 07:26:15.727313 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:26:15.727325 | orchestrator | 2025-09-19 07:26:15.727338 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-19 07:26:15.727351 | orchestrator | Friday 19 September 2025 07:23:49 +0000 (0:00:00.705) 0:00:04.016 ****** 2025-09-19 07:26:15.727409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727438 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727450 | orchestrator | 2025-09-19 07:26:15.727463 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-19 07:26:15.727476 | orchestrator | Friday 19 September 2025 07:23:50 +0000 (0:00:01.411) 0:00:05.428 ****** 2025-09-19 07:26:15.727488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:26:15.727501 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:15.727512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:26:15.727615 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:15.727666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:26:15.727680 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:15.727691 | orchestrator | 2025-09-19 07:26:15.727702 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-19 07:26:15.727712 | orchestrator | Friday 19 September 2025 07:23:51 +0000 (0:00:00.381) 0:00:05.810 ****** 2025-09-19 07:26:15.727723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:26:15.727735 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:15.727746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:26:15.727757 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:15.727796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 07:26:15.727808 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:15.727818 | orchestrator | 2025-09-19 07:26:15.727829 | 
orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-19 07:26:15.727840 | orchestrator | Friday 19 September 2025 07:23:52 +0000 (0:00:01.044) 0:00:06.855 ****** 2025-09-19 07:26:15.727851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727903 | orchestrator | 2025-09-19 07:26:15.727913 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-19 07:26:15.727924 | orchestrator | Friday 19 September 2025 07:23:53 +0000 (0:00:01.440) 0:00:08.295 ****** 2025-09-19 07:26:15.727935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 07:26:15.727977 | orchestrator | 2025-09-19 07:26:15.727988 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-19 07:26:15.727999 | orchestrator | Friday 19 September 2025 07:23:55 +0000 (0:00:01.471) 0:00:09.767 ****** 2025-09-19 07:26:15.728009 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:15.728020 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:15.728031 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:15.728042 | orchestrator | 2025-09-19 07:26:15.728053 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-19 07:26:15.728064 | orchestrator | Friday 19 September 2025 07:23:55 +0000 (0:00:00.517) 0:00:10.285 ****** 2025-09-19 07:26:15.728074 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 07:26:15.728086 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 07:26:15.728096 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 07:26:15.728107 | orchestrator | 2025-09-19 07:26:15.728118 | orchestrator | 
TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-19 07:26:15.728129 | orchestrator | Friday 19 September 2025 07:23:57 +0000 (0:00:01.381) 0:00:11.667 ****** 2025-09-19 07:26:15.728139 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 07:26:15.728150 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 07:26:15.728161 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 07:26:15.728172 | orchestrator | 2025-09-19 07:26:15.728182 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-19 07:26:15.728193 | orchestrator | Friday 19 September 2025 07:23:58 +0000 (0:00:01.297) 0:00:12.965 ****** 2025-09-19 07:26:15.728209 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:26:15.728220 | orchestrator | 2025-09-19 07:26:15.728231 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-19 07:26:15.728242 | orchestrator | Friday 19 September 2025 07:23:59 +0000 (0:00:00.787) 0:00:13.752 ****** 2025-09-19 07:26:15.728253 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-19 07:26:15.728264 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-19 07:26:15.728292 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:26:15.728303 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:26:15.728314 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:26:15.728324 | orchestrator | 2025-09-19 07:26:15.728335 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-19 07:26:15.728346 | orchestrator | Friday 19 September 2025 07:23:59 +0000 
(0:00:00.726) 0:00:14.479 ****** 2025-09-19 07:26:15.728357 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:15.728368 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:15.728378 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:15.728389 | orchestrator | 2025-09-19 07:26:15.728400 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-19 07:26:15.728440 | orchestrator | Friday 19 September 2025 07:24:00 +0000 (0:00:00.634) 0:00:15.114 ****** 2025-09-19 07:26:15.728452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1109469, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1033623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1109469, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1033623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728487 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1109469, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1033623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1109565, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1150658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1109565, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1150658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728529 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1109565, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1150658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1109487, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.106014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1109487, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.106014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 
07:26:15.728569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1109487, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.106014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1109571, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.116812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1109571, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.116812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-19 07:26:15.728675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1109571, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.116812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1109511, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1087642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1109511, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1087642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1109511, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1087642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1109544, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.113125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1109544, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.113125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1109544, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.113125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1109467, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.102127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1109467, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.102127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1109467, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.102127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1109476, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.103632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1109476, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.103632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1109476, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.103632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1109490, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.106014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1109490, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.106014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1109490, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.106014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1109527, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1103542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1109527, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1103542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1109527, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1103542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1109556, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1146445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.728981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1109556, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1146445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1109556, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1146445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1109480, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1051953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1109480, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1051953, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1109480, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1051953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1109538, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1117864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1109538, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 
1758263636.1117864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1109538, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1117864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1109518, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1100621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1109518, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 
'mtime': 1758240129.0, 'ctime': 1758263636.1100621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1109518, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1100621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1109503, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1086795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 
'inode': 1109503, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1086795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1109503, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1086795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1109497, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.106764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.729234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1109497, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.106764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
[2025-09-19 07:26:15.729245 .. 07:26:15.730146 | orchestrator | repeated per-item stat output condensed: the task reported "changed" on testbed-node-0, testbed-node-1 and testbed-node-2 for each Grafana dashboard file under /operations/grafana/dashboards/, all regular files, mode 0644, owner root:root: ceph/hosts-overview.json (27218 bytes), ceph/pool-overview.json (49139), ceph/host-details.json (44791), ceph/radosgw-sync-overview.json (16156), openstack/openstack.json (57270), infrastructure/haproxy.json (410814), infrastructure/database.json (30898), infrastructure/node-rsrc-use.json (15725), infrastructure/alertmanager-overview.json (9645), infrastructure/opensearch.json (65458), infrastructure/node_exporter_full.json (682774), infrastructure/prometheus-remote-write.json (22317), infrastructure/redfish.json (38087), infrastructure/nodes.json (21109), infrastructure/memcached.json (24243), infrastructure/fluentd.json (82960), infrastructure/libvirt.json (29672), infrastructure/elasticsearch.json (187864), infrastructure/node-cluster-rsrc-use.json (16098)]
2025-09-19 07:26:15.730165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0,
'gid': 0, 'size': 222049, 'inode': 1109768, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1964598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1109768, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1964598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1109712, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1579194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1109762, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1945329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1109762, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1945329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1109768, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1964598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1109680, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.148509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1109680, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.148509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1109762, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1945329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730381 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1109688, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1493433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1109688, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1493433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 07:26:15.730417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1109680, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.148509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-19 07:26:15.730428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1109730, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1847653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:26:15.730440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1109730, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1847653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:26:15.730451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1109688, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1493433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:26:15.730469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1109755, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1923187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:26:15.730480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1109755, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1923187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:26:15.730503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1109730, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1847653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:26:15.730514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1109755, 'dev': 104, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758263636.1923187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 07:26:15.730525 | orchestrator |
2025-09-19 07:26:15.730537 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-09-19 07:26:15.730547 | orchestrator | Friday 19 September 2025 07:24:40 +0000 (0:00:39.670) 0:00:54.784 ******
2025-09-19 07:26:15.730559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:26:15.730576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:26:15.730588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 07:26:15.730598 | orchestrator |
2025-09-19 07:26:15.730607 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-09-19 07:26:15.730617 | orchestrator | Friday 19 September 2025 07:24:41 +0000 (0:00:01.060) 0:00:55.845 ******
2025-09-19 07:26:15.730626 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:15.730636 | orchestrator |
2025-09-19 07:26:15.730646 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-09-19 07:26:15.730655 | orchestrator | Friday 19 September 2025 07:24:43 +0000 (0:00:02.370) 0:00:58.215 ******
2025-09-19 07:26:15.730665 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:15.730674 | orchestrator |
2025-09-19 07:26:15.730684 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-19 07:26:15.730693 | orchestrator | Friday 19 September 2025 07:24:46 +0000 (0:00:02.437) 0:01:00.653 ******
2025-09-19 07:26:15.730702 | orchestrator |
2025-09-19 07:26:15.730712 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-19 07:26:15.730731 | orchestrator | Friday 19 September 2025 07:24:46 +0000 (0:00:00.151) 0:01:00.805 ******
2025-09-19 07:26:15.730741 | orchestrator |
2025-09-19 07:26:15.730750 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-19 07:26:15.730760 | orchestrator | Friday 19 September 2025 07:24:46 +0000 (0:00:00.102) 0:01:00.908 ******
2025-09-19 07:26:15.730769 | orchestrator |
2025-09-19 07:26:15.730779 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-09-19 07:26:15.730788 | orchestrator | Friday 19 September 2025 07:24:46 +0000 (0:00:00.381) 0:01:01.289 ******
2025-09-19 07:26:15.730798 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:15.730807 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:15.730816 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:15.730826 | orchestrator |
2025-09-19 07:26:15.730835 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-09-19 07:26:15.730845 | orchestrator | Friday 19 September 2025 07:24:48 +0000 (0:00:02.114) 0:01:03.403 ******
2025-09-19 07:26:15.730854 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:15.730863 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:15.730873 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-09-19 07:26:15.730883 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-09-19 07:26:15.730892 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-09-19 07:26:15.730908 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2025-09-19 07:26:15.730917 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:15.730927 | orchestrator |
2025-09-19 07:26:15.730937 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-09-19 07:26:15.730946 | orchestrator | Friday 19 September 2025 07:25:40 +0000 (0:00:51.719) 0:01:55.123 ******
2025-09-19 07:26:15.730956 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:15.730965 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:15.730975 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:15.730984 | orchestrator |
2025-09-19 07:26:15.730994 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-09-19 07:26:15.731003 | orchestrator | Friday 19 September 2025 07:26:09 +0000 (0:00:28.734) 0:02:23.857 ******
2025-09-19 07:26:15.731013 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:15.731022 | orchestrator |
2025-09-19 07:26:15.731032 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-09-19 07:26:15.731041 | orchestrator | Friday 19 September 2025 07:26:11 +0000 (0:00:02.304) 0:02:26.162 ******
2025-09-19 07:26:15.731051 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:15.731060 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:15.731069 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:15.731079 | orchestrator |
2025-09-19 07:26:15.731089 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-19 07:26:15.731098 | orchestrator | Friday 19 September 2025 07:26:12 +0000 (0:00:00.578) 0:02:26.740 ******
2025-09-19 07:26:15.731110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-09-19 07:26:15.731122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-19 07:26:15.731132 | orchestrator |
2025-09-19 07:26:15.731141 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-19 07:26:15.731151 | orchestrator | Friday 19 September 2025 07:26:14 +0000 (0:00:02.635) 0:02:29.376 ******
2025-09-19 07:26:15.731161 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:15.731170 | orchestrator |
2025-09-19 07:26:15.731179 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 07:26:15.731190 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 07:26:15.731201 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 07:26:15.731210 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 07:26:15.731220 | orchestrator |
2025-09-19 07:26:15.731229 | orchestrator |
2025-09-19 07:26:15.731239 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 07:26:15.731248 | orchestrator | Friday 19 September 2025 07:26:15 +0000 (0:00:00.276) 0:02:29.652 ******
2025-09-19 07:26:15.731258 | orchestrator | ===============================================================================
2025-09-19 07:26:15.731267 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.72s
2025-09-19 07:26:15.731294 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.67s
2025-09-19 07:26:15.731304 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.73s
2025-09-19 07:26:15.731322 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.64s
2025-09-19 07:26:15.731341 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.44s
2025-09-19 07:26:15.731352 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.37s
2025-09-19 07:26:15.731361 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.30s
2025-09-19 07:26:15.731371 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.11s
2025-09-19 07:26:15.731380 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.47s
2025-09-19 07:26:15.731390 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.44s
2025-09-19 07:26:15.731399 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.41s
2025-09-19 07:26:15.731408 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.38s
2025-09-19 07:26:15.731418 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.30s
2025-09-19 07:26:15.731427 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.06s
2025-09-19 07:26:15.731437 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.04s
2025-09-19 07:26:15.731446 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s
2025-09-19 07:26:15.731455 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.82s
2025-09-19 07:26:15.731465 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.79s
2025-09-19 07:26:15.731474 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s
2025-09-19 07:26:15.731484 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.71s
2025-09-19 07:26:15.731493 | orchestrator | 2025-09-19 07:26:15 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:26:15.731503 | orchestrator | 2025-09-19 07:26:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:26:18.778887 | orchestrator | 2025-09-19 07:26:18 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED
2025-09-19 07:26:18.779312 | orchestrator | 2025-09-19 07:26:18 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:26:18.779337 | orchestrator | 2025-09-19 07:26:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:26:21.826334 | orchestrator | 2025-09-19 07:26:21 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED
2025-09-19 07:26:21.827210 | orchestrator | 2025-09-19 07:26:21 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:26:21.827243 | orchestrator | 2025-09-19 07:26:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:26:24.869055 | orchestrator | 2025-09-19 07:26:24 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED
2025-09-19 07:26:24.870664 | orchestrator | 2025-09-19 07:26:24 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state STARTED
2025-09-19 07:26:24.871026 | orchestrator | 2025-09-19 07:26:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 07:26:27.917926 | orchestrator | 2025-09-19 07:26:27 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED
2025-09-19 07:26:27.923701 | orchestrator | 2025-09-19 07:26:27 | INFO  | Task 62e6836e-ec1e-4af1-ace0-b9d891af6f16 is in state SUCCESS
2025-09-19 07:26:27.926500 | orchestrator |
2025-09-19 07:26:27.926685 | orchestrator |
2025-09-19 07:26:27.926705 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:26:27.926718 | orchestrator |
2025-09-19 07:26:27.926730 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-19 07:26:27.926741 | orchestrator | Friday 19 September 2025 07:17:10 +0000 (0:00:00.252) 0:00:00.252 ******
2025-09-19 07:26:27.926779 | orchestrator | changed: [testbed-manager]
2025-09-19 07:26:27.926793 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.926804 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:27.926833 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:27.926855 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:26:27.926867 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:26:27.926880 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:26:27.926904 | orchestrator |
2025-09-19 07:26:27.926917 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:26:27.926974 | orchestrator | Friday 19 September 2025 07:17:11 +0000 (0:00:00.713) 0:00:00.966 ******
2025-09-19 07:26:27.926988 | orchestrator | changed: [testbed-manager]
2025-09-19 07:26:27.927000 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.927012 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:27.927025 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:27.927037 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:26:27.927050 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:26:27.927062 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:26:27.927074 | orchestrator |
2025-09-19 07:26:27.927115 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:26:27.927137 | orchestrator | Friday 19 September 2025 07:17:12 +0000 (0:00:00.685) 0:00:01.652 ******
2025-09-19 07:26:27.927150 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-09-19 07:26:27.927163 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-09-19 07:26:27.927176 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-09-19 07:26:27.927188 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-09-19 07:26:27.927256 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-09-19 07:26:27.927270 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-09-19 07:26:27.927281 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-09-19 07:26:27.927318 | orchestrator |
2025-09-19 07:26:27.927330 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-09-19 07:26:27.927389 | orchestrator |
2025-09-19 07:26:27.927401 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-19 07:26:27.927411 | orchestrator | Friday 19 September 2025 07:17:14 +0000 (0:00:01.936) 0:00:03.588 ******
2025-09-19 07:26:27.927422 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:26:27.927433 | orchestrator |
2025-09-19 07:26:27.927444 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-09-19 07:26:27.927480 | orchestrator | Friday 19 September 2025 07:17:16 +0000 (0:00:02.094) 0:00:05.683 ******
2025-09-19 07:26:27.927493 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-09-19 07:26:27.927504 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-09-19 07:26:27.927515 | orchestrator |
2025-09-19 07:26:27.927526 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-09-19 07:26:27.927537 | orchestrator | Friday 19 September 2025 07:17:20 +0000 (0:00:03.729) 0:00:09.413 ******
2025-09-19 07:26:27.927548 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 07:26:27.927559 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 07:26:27.927569 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.927580 | orchestrator |
2025-09-19 07:26:27.927610 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-19 07:26:27.927622 | orchestrator | Friday 19 September 2025 07:17:23 +0000 (0:00:03.493) 0:00:12.906 ******
2025-09-19 07:26:27.927633 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.927644 | orchestrator |
2025-09-19 07:26:27.927655 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-09-19 07:26:27.927666 | orchestrator | Friday 19 September 2025 07:17:24 +0000 (0:00:00.536) 0:00:13.442 ******
2025-09-19 07:26:27.927687 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.927698 | orchestrator |
2025-09-19 07:26:27.927709 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-19 07:26:27.927737 | orchestrator | Friday 19 September 2025 07:17:25 +0000 (0:00:01.488) 0:00:14.931 ******
2025-09-19 07:26:27.927761 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.927772 | orchestrator |
2025-09-19 07:26:27.927783 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 07:26:27.927793 | orchestrator | Friday 19 September 2025 07:17:28 +0000 (0:00:03.200) 0:00:18.132 ******
2025-09-19 07:26:27.927804 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.927815 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.927826 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.927837 | orchestrator |
2025-09-19 07:26:27.927847 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-19 07:26:27.927858 | orchestrator | Friday 19 September 2025 07:17:29 +0000 (0:00:00.339) 0:00:18.472 ******
2025-09-19 07:26:27.927869 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:27.927881 | orchestrator |
2025-09-19 07:26:27.927892 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-19 07:26:27.927903 | orchestrator | Friday 19 September 2025 07:17:59 +0000 (0:00:30.726) 0:00:49.199 ******
2025-09-19 07:26:27.927914 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.927924 | orchestrator |
2025-09-19 07:26:27.927936 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 07:26:27.927947 | orchestrator | Friday 19 September 2025 07:18:13 +0000 (0:00:14.066) 0:01:03.266 ******
2025-09-19 07:26:27.927984 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:27.927996 | orchestrator |
2025-09-19 07:26:27.928007 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 07:26:27.928018 | orchestrator | Friday 19 September 2025 07:18:26 +0000 (0:00:12.166) 0:01:15.432 ******
2025-09-19 07:26:27.928044 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:27.928056 | orchestrator |
2025-09-19 07:26:27.928067 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-19 07:26:27.928077 | orchestrator | Friday 19 September 2025 07:18:26 +0000 (0:00:00.905) 0:01:16.337 ******
2025-09-19 07:26:27.928088 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.928099 | orchestrator |
2025-09-19 07:26:27.928109 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 07:26:27.928120 | orchestrator | Friday 19 September 2025 07:18:27 +0000 (0:00:00.390) 0:01:16.727 ******
2025-09-19 07:26:27.928131 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:26:27.928142 | orchestrator |
2025-09-19 07:26:27.928153 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-19 07:26:27.928163 | orchestrator | Friday 19 September 2025 07:18:27 +0000 (0:00:00.428) 0:01:17.156 ******
2025-09-19 07:26:27.928174 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:27.928184 | orchestrator |
2025-09-19 07:26:27.928195 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-19 07:26:27.928206 | orchestrator | Friday 19 September 2025 07:18:46 +0000 (0:00:18.887) 0:01:36.043 ******
2025-09-19 07:26:27.928216 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.928227 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.928238 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.928248 | orchestrator |
2025-09-19 07:26:27.928259 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-19 07:26:27.928270 | orchestrator |
2025-09-19 07:26:27.928280 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-19 07:26:27.928311 | orchestrator | Friday 19 September 2025 07:18:46 +0000 (0:00:00.346) 0:01:36.390 ******
2025-09-19 07:26:27.928408 | orchestrator | included: nova-cell for testbed-node-0,
testbed-node-1, testbed-node-2 2025-09-19 07:26:27.928429 | orchestrator | 2025-09-19 07:26:27.928440 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-19 07:26:27.928458 | orchestrator | Friday 19 September 2025 07:18:47 +0000 (0:00:00.628) 0:01:37.019 ****** 2025-09-19 07:26:27.928469 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.928480 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.928491 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:27.928501 | orchestrator | 2025-09-19 07:26:27.928512 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-19 07:26:27.928523 | orchestrator | Friday 19 September 2025 07:18:49 +0000 (0:00:02.324) 0:01:39.344 ****** 2025-09-19 07:26:27.928534 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.928544 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.928555 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:27.928566 | orchestrator | 2025-09-19 07:26:27.928576 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-19 07:26:27.928587 | orchestrator | Friday 19 September 2025 07:18:52 +0000 (0:00:02.456) 0:01:41.800 ****** 2025-09-19 07:26:27.928598 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.928608 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.928619 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.928629 | orchestrator | 2025-09-19 07:26:27.928640 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-19 07:26:27.928651 | orchestrator | Friday 19 September 2025 07:18:53 +0000 (0:00:00.914) 0:01:42.715 ****** 2025-09-19 07:26:27.928661 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 07:26:27.928672 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.928683 | 
orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 07:26:27.928693 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.928704 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-19 07:26:27.928715 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-19 07:26:27.928726 | orchestrator | 2025-09-19 07:26:27.928736 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-19 07:26:27.928747 | orchestrator | Friday 19 September 2025 07:19:03 +0000 (0:00:10.499) 0:01:53.215 ****** 2025-09-19 07:26:27.928758 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.928769 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.928779 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.928790 | orchestrator | 2025-09-19 07:26:27.928801 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-19 07:26:27.928811 | orchestrator | Friday 19 September 2025 07:19:04 +0000 (0:00:00.671) 0:01:53.887 ****** 2025-09-19 07:26:27.928822 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-19 07:26:27.928833 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.928844 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 07:26:27.928854 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.928865 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 07:26:27.928875 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.928886 | orchestrator | 2025-09-19 07:26:27.928897 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-19 07:26:27.928908 | orchestrator | Friday 19 September 2025 07:19:05 +0000 (0:00:01.013) 0:01:54.901 ****** 2025-09-19 07:26:27.928918 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.928929 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 07:26:27.928939 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:27.928950 | orchestrator | 2025-09-19 07:26:27.928961 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-19 07:26:27.928971 | orchestrator | Friday 19 September 2025 07:19:06 +0000 (0:00:00.637) 0:01:55.538 ****** 2025-09-19 07:26:27.928982 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.928993 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.929003 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:27.929022 | orchestrator | 2025-09-19 07:26:27.929033 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-19 07:26:27.929044 | orchestrator | Friday 19 September 2025 07:19:07 +0000 (0:00:01.368) 0:01:56.907 ****** 2025-09-19 07:26:27.929054 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.929065 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.929083 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:27.929095 | orchestrator | 2025-09-19 07:26:27.929105 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-19 07:26:27.929116 | orchestrator | Friday 19 September 2025 07:19:10 +0000 (0:00:03.074) 0:01:59.981 ****** 2025-09-19 07:26:27.929127 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.929138 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.929148 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:26:27.929158 | orchestrator | 2025-09-19 07:26:27.929169 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-19 07:26:27.929180 | orchestrator | Friday 19 September 2025 07:19:32 +0000 (0:00:21.762) 0:02:21.743 ****** 2025-09-19 07:26:27.929190 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.929201 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 07:26:27.929211 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:26:27.929222 | orchestrator | 2025-09-19 07:26:27.929232 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-19 07:26:27.929243 | orchestrator | Friday 19 September 2025 07:19:45 +0000 (0:00:12.784) 0:02:34.528 ****** 2025-09-19 07:26:27.929253 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:26:27.929264 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.929275 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.929285 | orchestrator | 2025-09-19 07:26:27.929346 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-19 07:26:27.929357 | orchestrator | Friday 19 September 2025 07:19:46 +0000 (0:00:01.047) 0:02:35.575 ****** 2025-09-19 07:26:27.929368 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.929378 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.929389 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:27.929400 | orchestrator | 2025-09-19 07:26:27.929410 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-19 07:26:27.929421 | orchestrator | Friday 19 September 2025 07:19:57 +0000 (0:00:11.727) 0:02:47.303 ****** 2025-09-19 07:26:27.929431 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.929442 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.929459 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.929470 | orchestrator | 2025-09-19 07:26:27.929481 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-19 07:26:27.929491 | orchestrator | Friday 19 September 2025 07:19:58 +0000 (0:00:01.055) 0:02:48.358 ****** 2025-09-19 07:26:27.929502 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.929512 | orchestrator | skipping: [testbed-node-1] 
2025-09-19 07:26:27.929523 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.929533 | orchestrator | 2025-09-19 07:26:27.929544 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-19 07:26:27.929555 | orchestrator | 2025-09-19 07:26:27.929565 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 07:26:27.929576 | orchestrator | Friday 19 September 2025 07:19:59 +0000 (0:00:00.518) 0:02:48.877 ****** 2025-09-19 07:26:27.929587 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:26:27.929599 | orchestrator | 2025-09-19 07:26:27.929609 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-19 07:26:27.929620 | orchestrator | Friday 19 September 2025 07:20:00 +0000 (0:00:00.540) 0:02:49.417 ****** 2025-09-19 07:26:27.929631 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-19 07:26:27.929641 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-19 07:26:27.929661 | orchestrator | 2025-09-19 07:26:27.929672 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-19 07:26:27.929683 | orchestrator | Friday 19 September 2025 07:20:03 +0000 (0:00:03.402) 0:02:52.820 ****** 2025-09-19 07:26:27.929694 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-19 07:26:27.929706 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-19 07:26:27.929717 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-19 07:26:27.929728 | orchestrator | changed: [testbed-node-0] => (item=nova 
-> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-19 07:26:27.929739 | orchestrator | 2025-09-19 07:26:27.929750 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-19 07:26:27.929761 | orchestrator | Friday 19 September 2025 07:20:10 +0000 (0:00:07.019) 0:02:59.840 ****** 2025-09-19 07:26:27.929771 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 07:26:27.929782 | orchestrator | 2025-09-19 07:26:27.929793 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-19 07:26:27.929803 | orchestrator | Friday 19 September 2025 07:20:14 +0000 (0:00:03.607) 0:03:03.447 ****** 2025-09-19 07:26:27.929814 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 07:26:27.929825 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-19 07:26:27.929836 | orchestrator | 2025-09-19 07:26:27.929846 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-19 07:26:27.929857 | orchestrator | Friday 19 September 2025 07:20:18 +0000 (0:00:04.078) 0:03:07.526 ****** 2025-09-19 07:26:27.929867 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 07:26:27.929878 | orchestrator | 2025-09-19 07:26:27.929888 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-19 07:26:27.929899 | orchestrator | Friday 19 September 2025 07:20:22 +0000 (0:00:03.933) 0:03:11.460 ****** 2025-09-19 07:26:27.929910 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-19 07:26:27.929920 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-19 07:26:27.929931 | orchestrator | 2025-09-19 07:26:27.929942 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-19 07:26:27.929959 | orchestrator | Friday 19 
September 2025 07:20:30 +0000 (0:00:08.493) 0:03:19.953 ****** 2025-09-19 07:26:27.929977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.929999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.930068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.930095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.930109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.930126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.930145 | orchestrator | 2025-09-19 07:26:27.930157 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-19 07:26:27.930167 | orchestrator | Friday 19 September 2025 07:20:32 +0000 (0:00:01.461) 0:03:21.415 
****** 2025-09-19 07:26:27.930178 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.930189 | orchestrator | 2025-09-19 07:26:27.930200 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-19 07:26:27.930210 | orchestrator | Friday 19 September 2025 07:20:32 +0000 (0:00:00.138) 0:03:21.554 ****** 2025-09-19 07:26:27.930221 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.930232 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.930242 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.930253 | orchestrator | 2025-09-19 07:26:27.930264 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-19 07:26:27.930275 | orchestrator | Friday 19 September 2025 07:20:32 +0000 (0:00:00.568) 0:03:22.123 ****** 2025-09-19 07:26:27.930285 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 07:26:27.930313 | orchestrator | 2025-09-19 07:26:27.930324 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-19 07:26:27.930335 | orchestrator | Friday 19 September 2025 07:20:34 +0000 (0:00:01.355) 0:03:23.478 ****** 2025-09-19 07:26:27.930346 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.930357 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.930367 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.930378 | orchestrator | 2025-09-19 07:26:27.930389 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 07:26:27.930399 | orchestrator | Friday 19 September 2025 07:20:34 +0000 (0:00:00.415) 0:03:23.894 ****** 2025-09-19 07:26:27.930410 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:26:27.930421 | orchestrator | 2025-09-19 07:26:27.930432 | orchestrator | TASK [service-cert-copy : nova | 
Copying over extra CA certificates] *********** 2025-09-19 07:26:27.930443 | orchestrator | Friday 19 September 2025 07:20:35 +0000 (0:00:01.257) 0:03:25.152 ****** 2025-09-19 07:26:27.930455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.930477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.930501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 
07:26:27.930515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.930527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.930544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.930556 | orchestrator | 2025-09-19 07:26:27.930567 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 
2025-09-19 07:26:27.930578 | orchestrator | Friday 19 September 2025 07:20:38 +0000 (0:00:02.869) 0:03:28.021 ****** 2025-09-19 07:26:27.930601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:26:27.930620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2025-09-19 07:26:27.930632 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.930644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:26:27.930656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.930667 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 07:26:27.930687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:26:27.930710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.930722 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
07:26:27.930733 | orchestrator | 2025-09-19 07:26:27.930744 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 07:26:27.930754 | orchestrator | Friday 19 September 2025 07:20:40 +0000 (0:00:01.751) 0:03:29.772 ****** 2025-09-19 07:26:27.930788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:26:27.930800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.930812 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.931442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:26:27.931563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.931582 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.931596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:26:27.931609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.931621 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.931633 | orchestrator | 2025-09-19 07:26:27.931645 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-19 07:26:27.931657 | orchestrator | Friday 19 September 2025 07:20:41 +0000 (0:00:01.354) 0:03:31.127 ****** 2025-09-19 07:26:27.931685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.931707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.931721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.931733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.931759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.931798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.931811 | orchestrator | 2025-09-19 07:26:27.931822 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-19 07:26:27.931833 | orchestrator | Friday 19 September 2025 07:20:44 +0000 (0:00:03.246) 0:03:34.373 ****** 2025-09-19 07:26:27.931850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.931863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.931883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.931902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.931919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.931931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.931942 | orchestrator | 2025-09-19 07:26:27.931953 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-19 07:26:27.931964 | orchestrator | Friday 19 September 2025 07:20:52 +0000 (0:00:07.569) 0:03:41.943 ****** 2025-09-19 07:26:27.931975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:26:27.931999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.932011 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.932024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:26:27.932042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.932055 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.932069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 07:26:27.932098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.932118 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.932134 | orchestrator | 2025-09-19 07:26:27.932146 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-19 07:26:27.932159 | orchestrator | Friday 19 September 2025 07:20:53 +0000 (0:00:01.054) 0:03:42.997 ****** 2025-09-19 07:26:27.932171 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:26:27.932184 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:27.932196 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:26:27.932208 | orchestrator | 2025-09-19 07:26:27.932227 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-19 07:26:27.932240 | orchestrator | Friday 19 September 2025 07:20:55 +0000 (0:00:01.515) 0:03:44.512 ****** 2025-09-19 07:26:27.932253 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.932266 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.932278 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.932315 | orchestrator | 2025-09-19 07:26:27.932327 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-19 07:26:27.932340 | orchestrator | Friday 19 September 2025 07:20:55 +0000 (0:00:00.311) 0:03:44.823 ****** 2025-09-19 07:26:27.932358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.932373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.932400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 07:26:27.932413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.932425 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.932441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.932453 | orchestrator | 2025-09-19 07:26:27.932464 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 07:26:27.932475 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:02.854) 0:03:47.678 ****** 2025-09-19 07:26:27.932486 | orchestrator | 2025-09-19 07:26:27.932497 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 07:26:27.932508 | orchestrator | Friday 19 September 2025 07:20:58 +0000 (0:00:00.312) 0:03:47.990 ****** 2025-09-19 07:26:27.932531 | orchestrator | 2025-09-19 07:26:27.932542 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 07:26:27.932553 | orchestrator | Friday 19 September 2025 07:20:58 +0000 
(0:00:00.300) 0:03:48.290 ****** 2025-09-19 07:26:27.932563 | orchestrator | 2025-09-19 07:26:27.932574 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-19 07:26:27.932584 | orchestrator | Friday 19 September 2025 07:20:59 +0000 (0:00:00.240) 0:03:48.531 ****** 2025-09-19 07:26:27.932595 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:27.932606 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:26:27.932616 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:26:27.932627 | orchestrator | 2025-09-19 07:26:27.932637 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-19 07:26:27.932648 | orchestrator | Friday 19 September 2025 07:21:23 +0000 (0:00:24.182) 0:04:12.713 ****** 2025-09-19 07:26:27.932658 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:26:27.932669 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:26:27.932679 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:26:27.932690 | orchestrator | 2025-09-19 07:26:27.932700 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-19 07:26:27.932711 | orchestrator | 2025-09-19 07:26:27.932721 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 07:26:27.932732 | orchestrator | Friday 19 September 2025 07:21:37 +0000 (0:00:14.343) 0:04:27.057 ****** 2025-09-19 07:26:27.932744 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:26:27.932756 | orchestrator | 2025-09-19 07:26:27.932766 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 07:26:27.932777 | orchestrator | Friday 19 September 2025 07:21:40 +0000 (0:00:03.001) 0:04:30.059 ****** 2025-09-19 07:26:27.932787 | 
orchestrator | skipping: [testbed-node-3] 2025-09-19 07:26:27.932798 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:26:27.932809 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:26:27.932819 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.932830 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.932840 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.932850 | orchestrator | 2025-09-19 07:26:27.932861 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-19 07:26:27.932872 | orchestrator | Friday 19 September 2025 07:21:41 +0000 (0:00:00.607) 0:04:30.666 ****** 2025-09-19 07:26:27.932882 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.932893 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.932903 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.932914 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 07:26:27.932924 | orchestrator | 2025-09-19 07:26:27.932935 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 07:26:27.932952 | orchestrator | Friday 19 September 2025 07:21:43 +0000 (0:00:01.971) 0:04:32.637 ****** 2025-09-19 07:26:27.932963 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-19 07:26:27.932974 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-19 07:26:27.932985 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-19 07:26:27.932995 | orchestrator | 2025-09-19 07:26:27.933006 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-19 07:26:27.933017 | orchestrator | Friday 19 September 2025 07:21:44 +0000 (0:00:01.177) 0:04:33.815 ****** 2025-09-19 07:26:27.933027 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-19 07:26:27.933038 | orchestrator | changed: 
[testbed-node-4] => (item=br_netfilter) 2025-09-19 07:26:27.933049 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-19 07:26:27.933060 | orchestrator | 2025-09-19 07:26:27.933070 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 07:26:27.933090 | orchestrator | Friday 19 September 2025 07:21:46 +0000 (0:00:01.635) 0:04:35.451 ****** 2025-09-19 07:26:27.933101 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-19 07:26:27.933111 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:26:27.933122 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-19 07:26:27.933133 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:26:27.933143 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-19 07:26:27.933154 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:26:27.933165 | orchestrator | 2025-09-19 07:26:27.933175 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-19 07:26:27.933186 | orchestrator | Friday 19 September 2025 07:21:47 +0000 (0:00:01.579) 0:04:37.030 ****** 2025-09-19 07:26:27.933197 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:26:27.933208 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:26:27.933219 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.933234 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:26:27.933245 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:26:27.933256 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.933267 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 07:26:27.933278 | orchestrator | changed: 
[testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 07:26:27.933320 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 07:26:27.933331 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 07:26:27.933342 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.933353 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 07:26:27.933363 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 07:26:27.933374 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 07:26:27.933384 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 07:26:27.933395 | orchestrator | 2025-09-19 07:26:27.933405 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-19 07:26:27.933416 | orchestrator | Friday 19 September 2025 07:21:49 +0000 (0:00:01.415) 0:04:38.445 ****** 2025-09-19 07:26:27.933427 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:26:27.933438 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.933448 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.933459 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:26:27.933470 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.933480 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:26:27.933491 | orchestrator | 2025-09-19 07:26:27.933502 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-19 07:26:27.933512 | orchestrator | Friday 19 September 2025 07:21:51 +0000 (0:00:02.285) 0:04:40.730 ****** 2025-09-19 07:26:27.933523 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.933533 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.933544 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.933554 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:26:27.933565 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:26:27.933575 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:26:27.933586 | orchestrator | 2025-09-19 07:26:27.933596 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-19 07:26:27.933607 | orchestrator | Friday 19 September 2025 07:21:52 +0000 (0:00:01.585) 0:04:42.316 ****** 2025-09-19 07:26:27.933619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933849 | orchestrator | 2025-09-19 07:26:27.933860 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 07:26:27.933871 | orchestrator | Friday 19 September 2025 07:21:55 +0000 (0:00:02.363) 0:04:44.680 ****** 2025-09-19 07:26:27.933887 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:26:27.933900 | orchestrator | 2025-09-19 07:26:27.933911 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-19 
07:26:27.933922 | orchestrator | Friday 19 September 2025 07:21:56 +0000 (0:00:01.183) 0:04:45.864 ****** 2025-09-19 07:26:27.933933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933969 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.933992 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934106 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 
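The container definitions in the log carry `healthcheck` stanzas such as `healthcheck_port nova-conductor 5672` and `healthcheck_curl http://…:6080/vnc_lite.html`. Kolla's actual helper scripts are not reproduced here; the following is only an illustrative sketch of the underlying "is something listening on host:port" probe, using a hypothetical `check_tcp` function built on bash's `/dev/tcp` redirection and coreutils `timeout`.

```shell
# Hedged sketch: NOT kolla's healthcheck_port implementation, just an
# equivalent TCP liveness probe. check_tcp is a made-up name for
# illustration.
check_tcp() {
  local host="$1" port="$2"
  # Returns 0 if a TCP connection to host:port can be opened within 2s,
  # non-zero otherwise (connection refused or timeout).
  timeout 2 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null
}

# Example probe against the RabbitMQ port used by the healthchecks above.
if check_tcp 127.0.0.1 5672; then
  echo "port open"
else
  echo "port closed"
fi
```

A Docker `HEALTHCHECK` built on a probe like this is what lets the `Restart nova-* container` handlers later in the run report containers as healthy or failing.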
2025-09-19 07:26:27.934152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934184 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.934206 | orchestrator | 2025-09-19 07:26:27.934217 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-19 07:26:27.934228 | orchestrator | Friday 19 September 2025 07:22:00 +0000 (0:00:04.508) 0:04:50.373 ****** 2025-09-19 07:26:27.934246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:26:27.934257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:26:27.934274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934285 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:26:27.934328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:26:27.934340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:26:27.934358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934369 | 
orchestrator | skipping: [testbed-node-4] 2025-09-19 07:26:27.934381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:26:27.934397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934408 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.934420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  
2025-09-19 07:26:27.934437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934448 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.934459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:26:27.934476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:26:27.934488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934500 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:26:27.934516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:26:27.934527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934544 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.934555 | orchestrator | 2025-09-19 07:26:27.934566 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 07:26:27.934577 | orchestrator | Friday 19 September 2025 07:22:03 +0000 (0:00:02.611) 0:04:52.984 ****** 2025-09-19 07:26:27.934588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:26:27.934600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:26:27.934697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934712 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:26:27.934723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:26:27.934747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:26:27.934759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934770 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:26:27.934781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:26:27.934793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934804 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.934822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:26:27.934834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:26:27.934857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:26:27.934869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 07:26:27.934892 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.934903 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:26:27.934914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 07:26:27.934931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-09-19 07:26:27.934943 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.934954 | orchestrator |
2025-09-19 07:26:27.934965 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 07:26:27.934976 | orchestrator | Friday 19 September 2025 07:22:06 +0000 (0:00:02.490) 0:04:55.475 ******
2025-09-19 07:26:27.934987 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.934997 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.935014 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.935025 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 07:26:27.935036 | orchestrator |
2025-09-19 07:26:27.935047 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-19 07:26:27.935057 | orchestrator | Friday 19 September 2025 07:22:07 +0000 (0:00:01.248) 0:04:56.724 ******
2025-09-19 07:26:27.935068 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 07:26:27.935079 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 07:26:27.935089 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 07:26:27.935100 | orchestrator |
2025-09-19 07:26:27.935111 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-19 07:26:27.935121 | orchestrator | Friday 19 September 2025 07:22:08 +0000 (0:00:01.234) 0:04:57.959 ******
2025-09-19 07:26:27.935137 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 07:26:27.935148 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 07:26:27.935158 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 07:26:27.935169 | orchestrator |
2025-09-19 07:26:27.935179 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-19 07:26:27.935190 | orchestrator | Friday 19 September 2025 07:22:09 +0000 (0:00:00.983) 0:04:58.943 ******
2025-09-19 07:26:27.935200 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:26:27.935211 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:26:27.935222 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:26:27.935232 | orchestrator |
2025-09-19 07:26:27.935243 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-19 07:26:27.935253 | orchestrator | Friday 19 September 2025 07:22:10 +0000 (0:00:00.595) 0:04:59.538 ******
2025-09-19 07:26:27.935264 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:26:27.935275 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:26:27.935285 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:26:27.935314 | orchestrator |
2025-09-19 07:26:27.935325 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-19 07:26:27.935336 | orchestrator | Friday 19 September 2025 07:22:10 +0000 (0:00:00.860) 0:05:00.398 ******
2025-09-19 07:26:27.935347 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 07:26:27.935358 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 07:26:27.935368 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 07:26:27.935379 | orchestrator |
2025-09-19 07:26:27.935390 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-19 07:26:27.935400 | orchestrator | Friday 19 September 2025 07:22:12 +0000 (0:00:01.270) 0:05:01.669 ******
2025-09-19 07:26:27.935411 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 07:26:27.935421 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 07:26:27.935432 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 07:26:27.935443 | orchestrator |
2025-09-19 07:26:27.935453 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-19 07:26:27.935464 | orchestrator | Friday 19 September 2025 07:22:13 +0000 (0:00:01.152) 0:05:02.822 ******
2025-09-19 07:26:27.935474 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 07:26:27.935485 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 07:26:27.935496 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 07:26:27.935507 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-19 07:26:27.935517 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-19 07:26:27.935528 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-19 07:26:27.935538 | orchestrator |
2025-09-19 07:26:27.935549 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-19 07:26:27.935559 | orchestrator | Friday 19 September 2025 07:22:17 +0000 (0:00:04.214) 0:05:07.036 ******
2025-09-19 07:26:27.935577 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.935588 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.935599 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.935610 | orchestrator |
2025-09-19 07:26:27.935620 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-19 07:26:27.935631 | orchestrator | Friday 19 September 2025 07:22:18 +0000 (0:00:00.705) 0:05:07.742 ******
2025-09-19 07:26:27.935641 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.935652 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.935662 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.935673 | orchestrator |
2025-09-19 07:26:27.935684 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-19 07:26:27.935694 | orchestrator | Friday 19 September 2025 07:22:18 +0000 (0:00:00.389) 0:05:08.132 ******
2025-09-19 07:26:27.935705 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:26:27.935716 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:26:27.935727 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:26:27.935738 | orchestrator |
2025-09-19 07:26:27.935755 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-19 07:26:27.935766 | orchestrator | Friday 19 September 2025 07:22:20 +0000 (0:00:01.295) 0:05:09.427 ******
2025-09-19 07:26:27.935777 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 07:26:27.935788 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 07:26:27.935799 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 07:26:27.935810 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 07:26:27.935821 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 07:26:27.935832 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 07:26:27.935842 | orchestrator |
2025-09-19 07:26:27.935853 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-19 07:26:27.935864 | orchestrator | Friday 19 September 2025 07:22:23 +0000 (0:00:03.612) 0:05:13.040 ******
2025-09-19 07:26:27.935874 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 07:26:27.935885 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 07:26:27.935895 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 07:26:27.935906 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 07:26:27.935922 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:26:27.935933 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 07:26:27.935943 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:26:27.935954 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 07:26:27.935964 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:26:27.935975 | orchestrator |
2025-09-19 07:26:27.935986 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-19 07:26:27.935996 | orchestrator | Friday 19 September 2025 07:22:27 +0000 (0:00:03.933) 0:05:16.973 ******
2025-09-19 07:26:27.936007 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.936018 | orchestrator |
2025-09-19 07:26:27.936028 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-19 07:26:27.936039 | orchestrator | Friday 19 September 2025 07:22:27 +0000 (0:00:00.143) 0:05:17.116 ******
2025-09-19 07:26:27.936050 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.936061 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.936071 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.936089 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.936099 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.936110 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.936121 | orchestrator |
2025-09-19 07:26:27.936132 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-19 07:26:27.936142 | orchestrator | Friday 19 September 2025 07:22:28 +0000 (0:00:00.580) 0:05:17.696 ******
2025-09-19 07:26:27.936153 | orchestrator | ok:
[testbed-node-3 -> localhost] 2025-09-19 07:26:27.936165 | orchestrator | 2025-09-19 07:26:27.936175 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-19 07:26:27.936186 | orchestrator | Friday 19 September 2025 07:22:28 +0000 (0:00:00.675) 0:05:18.372 ****** 2025-09-19 07:26:27.936196 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:26:27.936207 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:26:27.936218 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:26:27.936228 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.936239 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.936249 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.936260 | orchestrator | 2025-09-19 07:26:27.936271 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-19 07:26:27.936282 | orchestrator | Friday 19 September 2025 07:22:29 +0000 (0:00:00.783) 0:05:19.155 ****** 2025-09-19 07:26:27.936350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936371 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936541 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936561 | orchestrator | 2025-09-19 07:26:27.936571 | orchestrator | TASK 
[nova-cell : Copying over nova.conf] ************************************** 2025-09-19 07:26:27.936580 | orchestrator | Friday 19 September 2025 07:22:33 +0000 (0:00:03.885) 0:05:23.041 ****** 2025-09-19 07:26:27.936605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:26:27.936615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:26:27.936625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:26:27.936635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:26:27.936650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 07:26:27.936661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 07:26:27.936682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936729 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 07:26:27.936790 | orchestrator | 2025-09-19 07:26:27.936800 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-19 07:26:27.936810 | orchestrator | Friday 19 September 2025 07:22:39 +0000 (0:00:06.204) 0:05:29.245 ****** 2025-09-19 07:26:27.936819 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:26:27.936829 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:26:27.936839 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:26:27.936848 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.936858 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.936867 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.936876 | orchestrator | 2025-09-19 07:26:27.936886 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-19 07:26:27.936896 | orchestrator | Friday 19 September 2025 07:22:41 +0000 (0:00:01.266) 
0:05:30.511 ****** 2025-09-19 07:26:27.936906 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-19 07:26:27.936915 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-19 07:26:27.936925 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-19 07:26:27.936934 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-19 07:26:27.936949 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-19 07:26:27.936966 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.936976 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-19 07:26:27.936985 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-19 07:26:27.936995 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-19 07:26:27.937004 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.937013 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-19 07:26:27.937023 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.937032 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-19 07:26:27.937042 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-19 07:26:27.937051 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-19 07:26:27.937061 | orchestrator | 2025-09-19 07:26:27.937071 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-19 07:26:27.937080 | 
orchestrator | Friday 19 September 2025 07:22:45 +0000 (0:00:03.891) 0:05:34.403 ****** 2025-09-19 07:26:27.937090 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:26:27.937100 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:26:27.937109 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:26:27.937118 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.937128 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.937138 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.937147 | orchestrator | 2025-09-19 07:26:27.937157 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-19 07:26:27.937172 | orchestrator | Friday 19 September 2025 07:22:45 +0000 (0:00:00.889) 0:05:35.293 ****** 2025-09-19 07:26:27.937182 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-19 07:26:27.937191 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-19 07:26:27.937201 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-19 07:26:27.937210 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-19 07:26:27.937220 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-19 07:26:27.937229 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-19 07:26:27.937239 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-19 07:26:27.937248 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 
'service': 'nova-libvirt'})  2025-09-19 07:26:27.937257 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.937267 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-19 07:26:27.937276 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-19 07:26:27.937285 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-19 07:26:27.937350 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.937360 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-19 07:26:27.937369 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-19 07:26:27.937386 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.937395 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-19 07:26:27.937405 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-19 07:26:27.937414 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-19 07:26:27.937423 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-19 07:26:27.937433 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-19 07:26:27.937442 | orchestrator | 2025-09-19 07:26:27.937451 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-19 07:26:27.937461 | orchestrator | Friday 19 
September 2025 07:22:53 +0000 (0:00:07.634) 0:05:42.927 ****** 2025-09-19 07:26:27.937470 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 07:26:27.937480 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 07:26:27.937495 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 07:26:27.937505 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-19 07:26:27.937515 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-19 07:26:27.937525 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 07:26:27.937534 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-19 07:26:27.937544 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 07:26:27.937553 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 07:26:27.937563 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 07:26:27.937572 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 07:26:27.937582 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 07:26:27.937591 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-19 07:26:27.937601 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.937610 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-19 07:26:27.937619 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
07:26:27.937629 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-19 07:26:27.937638 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.937653 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 07:26:27.937663 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 07:26:27.937673 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 07:26:27.937682 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 07:26:27.937691 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 07:26:27.937701 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 07:26:27.937710 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 07:26:27.937720 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 07:26:27.937729 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 07:26:27.937745 | orchestrator | 2025-09-19 07:26:27.937755 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-19 07:26:27.937764 | orchestrator | Friday 19 September 2025 07:23:00 +0000 (0:00:07.424) 0:05:50.352 ****** 2025-09-19 07:26:27.937774 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:26:27.937783 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:26:27.937793 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:26:27.937802 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.937811 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.937821 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 07:26:27.937830 | orchestrator | 2025-09-19 07:26:27.937840 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-19 07:26:27.937849 | orchestrator | Friday 19 September 2025 07:23:01 +0000 (0:00:00.748) 0:05:51.100 ****** 2025-09-19 07:26:27.937858 | orchestrator | skipping: [testbed-node-3] 2025-09-19 07:26:27.937868 | orchestrator | skipping: [testbed-node-4] 2025-09-19 07:26:27.937877 | orchestrator | skipping: [testbed-node-5] 2025-09-19 07:26:27.937886 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.937893 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.937901 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.937908 | orchestrator | 2025-09-19 07:26:27.937916 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-19 07:26:27.937924 | orchestrator | Friday 19 September 2025 07:23:02 +0000 (0:00:00.586) 0:05:51.687 ****** 2025-09-19 07:26:27.937932 | orchestrator | changed: [testbed-node-3] 2025-09-19 07:26:27.937939 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.937947 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.937955 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.937962 | orchestrator | changed: [testbed-node-4] 2025-09-19 07:26:27.937970 | orchestrator | changed: [testbed-node-5] 2025-09-19 07:26:27.937978 | orchestrator | 2025-09-19 07:26:27.937986 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-19 07:26:27.937993 | orchestrator | Friday 19 September 2025 07:23:05 +0000 (0:00:03.540) 0:05:55.227 ****** 2025-09-19 07:26:27.938007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:26:27.938047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:26:27.938063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938082 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.938090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:26:27.938099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:26:27.938107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938115 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.938129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:26:27.938138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:26:27.938156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938164 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.938172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:26:27.938181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938189 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.938197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:26:27.938210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:26:27.938218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938244 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.938251 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.938259 | orchestrator |
2025-09-19 07:26:27.938267 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-09-19 07:26:27.938275 | orchestrator | Friday 19 September 2025 07:23:06 +0000 (0:00:01.061) 0:05:56.289 ******
2025-09-19 07:26:27.938283 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-19 07:26:27.938305 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-19 07:26:27.938313 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.938321 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-19 07:26:27.938328 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-19 07:26:27.938336 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.938344 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-19 07:26:27.938351 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-19 07:26:27.938359 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.938367 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-19 07:26:27.938374 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-19 07:26:27.938382 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.938389 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-19 07:26:27.938397 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-19 07:26:27.938405 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.938412 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-19 07:26:27.938420 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-19 07:26:27.938428 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.938435 | orchestrator |
2025-09-19 07:26:27.938443 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-09-19 07:26:27.938451 | orchestrator | Friday 19 September 2025 07:23:07 +0000 (0:00:00.706) 0:05:56.995 ******
2025-09-19 07:26:27.938459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:26:27.938472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:26:27.938501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 07:26:27.938510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:26:27.938518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:26:27.938526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 07:26:27.938535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:26:27.938552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:26:27.938561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 07:26:27.938573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 07:26:27.938632 |
orchestrator |
2025-09-19 07:26:27.938640 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 07:26:27.938648 | orchestrator | Friday 19 September 2025 07:23:10 +0000 (0:00:02.991) 0:05:59.986 ******
2025-09-19 07:26:27.938656 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.938664 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.938672 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.938679 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.938687 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.938695 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.938703 | orchestrator |
2025-09-19 07:26:27.938714 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 07:26:27.938722 | orchestrator | Friday 19 September 2025 07:23:11 +0000 (0:00:00.632) 0:06:00.619 ******
2025-09-19 07:26:27.938730 | orchestrator |
2025-09-19 07:26:27.938738 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 07:26:27.938745 | orchestrator | Friday 19 September 2025 07:23:11 +0000 (0:00:00.123) 0:06:00.742 ******
2025-09-19 07:26:27.938753 | orchestrator |
2025-09-19 07:26:27.938761 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 07:26:27.938769 | orchestrator | Friday 19 September 2025 07:23:11 +0000 (0:00:00.121) 0:06:00.864 ******
2025-09-19 07:26:27.938776 | orchestrator |
2025-09-19 07:26:27.938784 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 07:26:27.938792 | orchestrator | Friday 19 September 2025 07:23:11 +0000 (0:00:00.124) 0:06:00.988 ******
2025-09-19 07:26:27.938800 | orchestrator |
2025-09-19 07:26:27.938807 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 07:26:27.938815 | orchestrator | Friday 19 September 2025 07:23:11 +0000 (0:00:00.124) 0:06:01.113 ******
2025-09-19 07:26:27.938823 | orchestrator |
2025-09-19 07:26:27.938830 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 07:26:27.938838 | orchestrator | Friday 19 September 2025 07:23:11 +0000 (0:00:00.124) 0:06:01.238 ******
2025-09-19 07:26:27.938845 | orchestrator |
2025-09-19 07:26:27.938853 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-09-19 07:26:27.938861 | orchestrator | Friday 19 September 2025 07:23:12 +0000 (0:00:00.226) 0:06:01.464 ******
2025-09-19 07:26:27.938868 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.938876 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:27.938884 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:27.938896 | orchestrator |
2025-09-19 07:26:27.938904 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-19 07:26:27.938911 | orchestrator | Friday 19 September 2025 07:23:21 +0000 (0:00:08.958) 0:06:10.423 ******
2025-09-19 07:26:27.938919 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.938927 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:27.938934 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:27.938942 | orchestrator |
2025-09-19 07:26:27.938950 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-19 07:26:27.938958 | orchestrator | Friday 19 September 2025 07:23:37 +0000 (0:00:16.302) 0:06:26.726 ******
2025-09-19 07:26:27.938965 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:26:27.938973 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:26:27.938980 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:26:27.938988 | orchestrator |
2025-09-19 07:26:27.938996 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-19 07:26:27.939003 | orchestrator | Friday 19 September 2025 07:24:03 +0000 (0:00:26.525) 0:06:53.251 ******
2025-09-19 07:26:27.939011 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:26:27.939019 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:26:27.939026 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:26:27.939034 | orchestrator |
2025-09-19 07:26:27.939042 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-09-19 07:26:27.939049 | orchestrator | Friday 19 September 2025 07:24:38 +0000 (0:00:34.868) 0:07:28.119 ******
2025-09-19 07:26:27.939057 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2025-09-19 07:26:27.939065 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2025-09-19 07:26:27.939072 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
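The "FAILED - RETRYING ... then changed" pattern above is Ansible's `until`/`retries` loop polling the container until its probe succeeds. A minimal Python sketch of that semantics (function names hypothetical, not kolla-ansible code; the real probe runs `virsh version --daemon` inside the container):

```python
import time

def wait_until_ready(probe, retries=10, delay=0.0):
    """Retry a readiness probe, printing Ansible-style retry messages."""
    for attempt in range(retries):
        if probe():
            return True
        # Mirrors the "FAILED - RETRYING (... retries left)." lines in the log.
        print(f"FAILED - RETRYING: probe ({retries - attempt - 1} retries left).")
        time.sleep(delay)
    return False

# Simulated probe that succeeds on its second call, like the nodes above
# (one failed attempt, then the task reports changed).
calls = []
def libvirt_probe():
    calls.append(1)
    return len(calls) >= 2

ready = wait_until_ready(libvirt_probe)
```

The delay between attempts is configurable in Ansible via the task's `delay` parameter; it is zeroed here only to keep the sketch fast.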
2025-09-19 07:26:27.939080 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:26:27.939088 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:26:27.939095 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:26:27.939103 | orchestrator |
2025-09-19 07:26:27.939116 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-19 07:26:27.939124 | orchestrator | Friday 19 September 2025 07:24:45 +0000 (0:00:06.357) 0:07:34.477 ******
2025-09-19 07:26:27.939131 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:26:27.939139 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:26:27.939147 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:26:27.939155 | orchestrator |
2025-09-19 07:26:27.939162 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-09-19 07:26:27.939170 | orchestrator | Friday 19 September 2025 07:24:45 +0000 (0:00:00.905) 0:07:35.382 ******
2025-09-19 07:26:27.939178 | orchestrator | changed: [testbed-node-3]
2025-09-19 07:26:27.939185 | orchestrator | changed: [testbed-node-5]
2025-09-19 07:26:27.939193 | orchestrator | changed: [testbed-node-4]
2025-09-19 07:26:27.939201 | orchestrator |
2025-09-19 07:26:27.939209 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-09-19 07:26:27.939216 | orchestrator | Friday 19 September 2025 07:25:12 +0000 (0:00:26.074) 0:08:01.457 ******
2025-09-19 07:26:27.939224 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.939232 | orchestrator |
2025-09-19 07:26:27.939239 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-09-19 07:26:27.939247 | orchestrator | Friday 19 September 2025 07:25:12 +0000 (0:00:00.131) 0:08:01.589 ******
2025-09-19 07:26:27.939255 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.939262 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.939270 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.939277 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.939285 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.939305 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-09-19 07:26:27.939318 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:26:27.939326 | orchestrator |
2025-09-19 07:26:27.939334 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-19 07:26:27.939345 | orchestrator | Friday 19 September 2025 07:25:34 +0000 (0:00:22.466) 0:08:24.055 ******
2025-09-19 07:26:27.939353 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.939361 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.939369 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.939376 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.939384 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.939392 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.939399 | orchestrator |
2025-09-19 07:26:27.939407 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-19 07:26:27.939415 | orchestrator | Friday 19 September 2025 07:25:46 +0000 (0:00:11.802) 0:08:35.857 ******
2025-09-19 07:26:27.939422 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.939430 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.939438 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.939445 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.939453 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.939461 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-09-19 07:26:27.939469 | orchestrator |
2025-09-19 07:26:27.939476 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 07:26:27.939484 | orchestrator | Friday 19 September 2025 07:25:50 +0000 (0:00:03.921) 0:08:39.779 ******
2025-09-19 07:26:27.939492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:26:27.939500 | orchestrator |
2025-09-19 07:26:27.939507 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 07:26:27.939515 | orchestrator | Friday 19 September 2025 07:26:03 +0000 (0:00:13.604) 0:08:53.383 ******
2025-09-19 07:26:27.939523 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:26:27.939531 | orchestrator |
2025-09-19 07:26:27.939538 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-19 07:26:27.939546 | orchestrator | Friday 19 September 2025 07:26:05 +0000 (0:00:01.397) 0:08:54.781 ******
2025-09-19 07:26:27.939553 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.939561 | orchestrator |
2025-09-19 07:26:27.939569 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-19 07:26:27.939577 | orchestrator | Friday 19 September 2025 07:26:06 +0000 (0:00:01.309) 0:08:56.091 ******
2025-09-19 07:26:27.939584 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 07:26:27.939592 | orchestrator |
2025-09-19 07:26:27.939600 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-19 07:26:27.939607 | orchestrator | Friday 19 September 2025 07:26:18 +0000 (0:00:12.239) 0:09:08.330 ******
2025-09-19 07:26:27.939615 | orchestrator | ok: [testbed-node-3]
2025-09-19 07:26:27.939623 | orchestrator | ok: [testbed-node-4]
2025-09-19 07:26:27.939630 | orchestrator | ok: [testbed-node-5]
2025-09-19 07:26:27.939638 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:26:27.939646 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:26:27.939653 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:26:27.939661 | orchestrator |
2025-09-19 07:26:27.939668 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-19 07:26:27.939676 | orchestrator |
2025-09-19 07:26:27.939684 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-19 07:26:27.939692 | orchestrator | Friday 19 September 2025 07:26:20 +0000 (0:00:01.939) 0:09:10.270 ******
2025-09-19 07:26:27.939699 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:26:27.939707 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:26:27.939715 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:26:27.939728 | orchestrator |
2025-09-19 07:26:27.939736 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-19 07:26:27.939743 | orchestrator |
2025-09-19 07:26:27.939751 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-19 07:26:27.939759 | orchestrator | Friday 19 September 2025 07:26:22 +0000 (0:00:01.163) 0:09:11.433 ******
2025-09-19 07:26:27.939767 | orchestrator | skipping: [testbed-node-0]
2025-09-19 07:26:27.939774 | orchestrator | skipping: [testbed-node-1]
2025-09-19 07:26:27.939782 | orchestrator | skipping: [testbed-node-2]
2025-09-19 07:26:27.939790 | orchestrator |
2025-09-19 07:26:27.939803 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-19 07:26:27.939811 | orchestrator |
2025-09-19 07:26:27.939818 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-19 07:26:27.939826 | orchestrator | Friday 19 September 2025 07:26:22 +0000 (0:00:00.561) 0:09:11.995 ******
2025-09-19 07:26:27.939834 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-09-19 07:26:27.939842 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-19 07:26:27.939850 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-19 07:26:27.939858 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-09-19 07:26:27.939866 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-09-19 07:26:27.939873 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-09-19 07:26:27.939881 | orchestrator | skipping: [testbed-node-3]
2025-09-19 07:26:27.939889 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-09-19 07:26:27.939897 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-19 07:26:27.939904 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-19 07:26:27.939912 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-09-19 07:26:27.939920 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-09-19 07:26:27.939927 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-09-19 07:26:27.939935 | orchestrator | skipping: [testbed-node-4]
2025-09-19 07:26:27.939943 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-09-19 07:26:27.939951 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-19 07:26:27.939958 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-19 07:26:27.939966 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-09-19 07:26:27.939980 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-09-19 07:26:27.939988 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-09-19 07:26:27.939996 | orchestrator | skipping: [testbed-node-5]
2025-09-19 07:26:27.940004 | orchestrator |
skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-19 07:26:27.940011 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-19 07:26:27.940019 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-19 07:26:27.940027 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-19 07:26:27.940034 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-19 07:26:27.940042 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-19 07:26:27.940050 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.940058 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-19 07:26:27.940065 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-19 07:26:27.940073 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-19 07:26:27.940081 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-19 07:26:27.940088 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-19 07:26:27.940096 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-19 07:26:27.940109 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.940117 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-19 07:26:27.940125 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-19 07:26:27.940133 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-19 07:26:27.940140 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-19 07:26:27.940148 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-19 07:26:27.940155 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-19 07:26:27.940163 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.940171 | orchestrator | 
2025-09-19 07:26:27.940179 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-19 07:26:27.940187 | orchestrator | 2025-09-19 07:26:27.940194 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-19 07:26:27.940202 | orchestrator | Friday 19 September 2025 07:26:24 +0000 (0:00:01.430) 0:09:13.426 ****** 2025-09-19 07:26:27.940210 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-19 07:26:27.940217 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-19 07:26:27.940225 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.940233 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-19 07:26:27.940240 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-19 07:26:27.940248 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.940256 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-19 07:26:27.940263 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-19 07:26:27.940271 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.940279 | orchestrator | 2025-09-19 07:26:27.940320 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-19 07:26:27.940330 | orchestrator | 2025-09-19 07:26:27.940337 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-19 07:26:27.940345 | orchestrator | Friday 19 September 2025 07:26:24 +0000 (0:00:00.731) 0:09:14.157 ****** 2025-09-19 07:26:27.940353 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.940361 | orchestrator | 2025-09-19 07:26:27.940368 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-19 07:26:27.940377 | orchestrator | 2025-09-19 07:26:27.940384 | orchestrator | TASK [nova-cell : Run Nova 
cell online database migrations] ******************** 2025-09-19 07:26:27.940392 | orchestrator | Friday 19 September 2025 07:26:25 +0000 (0:00:00.731) 0:09:14.888 ****** 2025-09-19 07:26:27.940400 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:26:27.940414 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:26:27.940422 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:26:27.940429 | orchestrator | 2025-09-19 07:26:27.940437 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:26:27.940445 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:26:27.940453 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-19 07:26:27.940462 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-19 07:26:27.940469 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-19 07:26:27.940477 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-19 07:26:27.940485 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-19 07:26:27.940498 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-19 07:26:27.940506 | orchestrator | 2025-09-19 07:26:27.940513 | orchestrator | 2025-09-19 07:26:27.940521 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:26:27.940533 | orchestrator | Friday 19 September 2025 07:26:25 +0000 (0:00:00.442) 0:09:15.331 ****** 2025-09-19 07:26:27.940541 | orchestrator | =============================================================================== 2025-09-19 07:26:27.940549 | orchestrator | 
nova-cell : Restart nova-libvirt container ----------------------------- 34.87s 2025-09-19 07:26:27.940557 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.73s 2025-09-19 07:26:27.940564 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.53s 2025-09-19 07:26:27.940572 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.07s 2025-09-19 07:26:27.940580 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.18s 2025-09-19 07:26:27.940588 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.47s 2025-09-19 07:26:27.940595 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.76s 2025-09-19 07:26:27.940603 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.89s 2025-09-19 07:26:27.940611 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.30s 2025-09-19 07:26:27.940618 | orchestrator | nova : Restart nova-api container -------------------------------------- 14.34s 2025-09-19 07:26:27.940626 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.07s 2025-09-19 07:26:27.940634 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.60s 2025-09-19 07:26:27.940641 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.78s 2025-09-19 07:26:27.940649 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.24s 2025-09-19 07:26:27.940657 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.17s 2025-09-19 07:26:27.940664 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.80s 2025-09-19 07:26:27.940672 | orchestrator | nova-cell : 
Create cell ------------------------------------------------ 11.73s 2025-09-19 07:26:27.940679 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 10.50s 2025-09-19 07:26:27.940687 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 8.96s 2025-09-19 07:26:27.940695 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.49s 2025-09-19 07:26:27.940703 | orchestrator | 2025-09-19 07:26:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:30.968284 | orchestrator | 2025-09-19 07:26:30 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:30.968435 | orchestrator | 2025-09-19 07:26:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:34.021268 | orchestrator | 2025-09-19 07:26:34 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:34.021465 | orchestrator | 2025-09-19 07:26:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:37.057179 | orchestrator | 2025-09-19 07:26:37 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:37.057355 | orchestrator | 2025-09-19 07:26:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:40.097796 | orchestrator | 2025-09-19 07:26:40 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:40.097896 | orchestrator | 2025-09-19 07:26:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:43.140279 | orchestrator | 2025-09-19 07:26:43 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:43.140485 | orchestrator | 2025-09-19 07:26:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 07:26:46.189554 | orchestrator | 2025-09-19 07:26:46 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state STARTED 2025-09-19 07:26:46.189659 | orchestrator | 2025-09-19 07:26:46 | INFO  
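The long run of "is in state STARTED / Wait 1 second(s) until the next check" messages above is produced by a simple fixed-interval poll of the task state. A minimal sketch of such a poller in plain Python, where `fetch_state` is a hypothetical stand-in for the real OSISM API call and is not part of the logged tooling:

```python
import time

def wait_for_task(fetch_state, task_id, interval=1.0, timeout=600, sleep=time.sleep):
    """Poll fetch_state(task_id) until the task reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in ("SUCCESS", "FAILURE"):
            return state
        print(f"Wait {interval:g} second(s) until the next check")
        sleep(interval)
    raise TimeoutError(f"task {task_id} not finished after {timeout}s")

# Drive the poller with a canned sequence of states instead of a live API,
# and a no-op sleep so the example returns immediately.
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task(lambda _tid: next(states), "d962df5d", sleep=lambda s: None)
print(result)  # SUCCESS
```

Injecting `sleep` as a parameter keeps the loop testable without real delays; the production loop would use the default `time.sleep`.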
2025-09-19 07:29:52.002665 | orchestrator | 2025-09-19 07:29:52 | INFO  | Task d962df5d-5f90-4c5d-8a4e-3664716a7f2a is in state SUCCESS
2025-09-19 07:29:52.004136 | orchestrator |
2025-09-19 07:29:52.004187 | orchestrator |
2025-09-19 07:29:52.004200 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 07:29:52.004213 | orchestrator |
2025-09-19 07:29:52.004224 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 07:29:52.004253 | orchestrator | Friday 19 September 2025 07:25:02 +0000 (0:00:00.299) 0:00:00.299 ******
2025-09-19 07:29:52.004265 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:29:52.004278 | orchestrator | ok: [testbed-node-1]
2025-09-19 07:29:52.004288 | orchestrator | ok: [testbed-node-2]
2025-09-19 07:29:52.004299 | orchestrator |
2025-09-19 07:29:52.004310 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 07:29:52.004321 | orchestrator | Friday 19 September 2025 07:25:02 +0000 (0:00:00.331) 0:00:00.631 ******
2025-09-19 07:29:52.004332 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-09-19 07:29:52.004344 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-09-19 07:29:52.004355 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-09-19 07:29:52.004366 | orchestrator |
2025-09-19 07:29:52.004377 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-09-19 07:29:52.004388 | orchestrator |
2025-09-19 07:29:52.004398 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 07:29:52.004409 | orchestrator | Friday 19 September 2025 07:25:03 +0000 (0:00:00.508) 0:00:01.139 ******
2025-09-19 07:29:52.004420 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:29:52.004432 | orchestrator |
2025-09-19 07:29:52.004442 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-09-19 07:29:52.004453 | orchestrator | Friday 19 September 2025 07:25:03 +0000 (0:00:00.529) 0:00:01.668 ******
2025-09-19 07:29:52.004464 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-09-19 07:29:52.004475 | orchestrator |
2025-09-19 07:29:52.004485 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-09-19 07:29:52.004517 | orchestrator | Friday 19 September 2025 07:25:07 +0000 (0:00:03.968) 0:00:05.637 ******
2025-09-19 07:29:52.004528 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-09-19 07:29:52.004539 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-09-19 07:29:52.004550 | orchestrator |
2025-09-19 07:29:52.004560 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-09-19 07:29:52.004571 | orchestrator | Friday 19 September 2025 07:25:14 +0000 (0:00:07.360) 0:00:12.998 ******
2025-09-19 07:29:52.004582 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 07:29:52.004593 | orchestrator |
2025-09-19 07:29:52.004718 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-09-19 07:29:52.004802 | orchestrator | Friday 19 September 2025 07:25:18 +0000 (0:00:03.910) 0:00:16.908 ******
2025-09-19 07:29:52.004816 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 07:29:52.004829 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-19 07:29:52.004843 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-19 07:29:52.004880 | orchestrator |
2025-09-19 07:29:52.004893 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-09-19 07:29:52.004907 | orchestrator | Friday 19 September 2025 07:25:27 +0000 (0:00:08.909) 0:00:25.818 ******
2025-09-19 07:29:52.004918 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 07:29:52.004929 | orchestrator |
2025-09-19 07:29:52.004940 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-09-19 07:29:52.004950 | orchestrator | Friday 19 September 2025 07:25:31 +0000 (0:00:03.948) 0:00:29.766 ******
2025-09-19 07:29:52.004962 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-19 07:29:52.004972 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-19 07:29:52.004983 | orchestrator |
2025-09-19 07:29:52.004994 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-09-19 07:29:52.005005 | orchestrator | Friday 19 September 2025 07:25:40 +0000 (0:00:08.387) 0:00:38.153 ******
2025-09-19 07:29:52.005015 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-09-19 07:29:52.005026 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-09-19 07:29:52.005036 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-09-19 07:29:52.005047 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-09-19 07:29:52.005058 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-09-19 07:29:52.005068 | orchestrator |
2025-09-19 07:29:52.005079 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 07:29:52.005090 | orchestrator | Friday 19 September 2025 07:25:56 +0000 (0:00:16.364) 0:00:54.518 ******
2025-09-19 07:29:52.005100 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 07:29:52.005111 | orchestrator |
2025-09-19 07:29:52.005122 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-09-19 07:29:52.005133 | orchestrator | Friday 19 September 2025 07:25:57 +0000 (0:00:00.590) 0:00:55.108 ******
2025-09-19 07:29:52.005143 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:29:52.005154 | orchestrator |
2025-09-19 07:29:52.005165 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-09-19 07:29:52.005175 | orchestrator | Friday 19 September 2025 07:26:01 +0000 (0:00:04.862) 0:00:59.971 ******
2025-09-19 07:29:52.005186 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:29:52.005197 | orchestrator |
2025-09-19 07:29:52.005208 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-19 07:29:52.005234 | orchestrator | Friday 19 September 2025 07:26:06 +0000 (0:00:03.309) 0:01:04.908 ******
2025-09-19 07:29:52.005260 | orchestrator | ok: [testbed-node-0]
2025-09-19 07:29:52.005271 | orchestrator |
2025-09-19 07:29:52.005282 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-09-19 07:29:52.005300 | orchestrator | Friday 19 September 2025 07:26:10 +0000 (0:00:03.309) 0:01:08.218 ******
2025-09-19 07:29:52.005311 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-19 07:29:52.005322 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-19 07:29:52.005333 | orchestrator |
2025-09-19 07:29:52.005343 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-09-19 07:29:52.005354 | orchestrator | Friday 19 September 2025 07:26:20 +0000 (0:00:10.243) 0:01:18.462 ******
2025-09-19 07:29:52.005365 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-09-19 07:29:52.005376 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-09-19 07:29:52.005388 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-09-19 07:29:52.005400 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-09-19 07:29:52.005420 | orchestrator |
2025-09-19 07:29:52.005430 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-09-19 07:29:52.005441 | orchestrator | Friday 19 September 2025 07:26:37 +0000 (0:00:17.252) 0:01:35.714 ******
2025-09-19 07:29:52.005452 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:29:52.005462 | orchestrator |
2025-09-19 07:29:52.005473 | orchestrator | TASK [octavia : Create loadbalancer
management subnet] ************************* 2025-09-19 07:29:52.005483 | orchestrator | Friday 19 September 2025 07:26:42 +0000 (0:00:04.901) 0:01:40.616 ****** 2025-09-19 07:29:52.005531 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.005543 | orchestrator | 2025-09-19 07:29:52.005579 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-19 07:29:52.005591 | orchestrator | Friday 19 September 2025 07:26:47 +0000 (0:00:05.461) 0:01:46.078 ****** 2025-09-19 07:29:52.005601 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:29:52.005612 | orchestrator | 2025-09-19 07:29:52.005628 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-19 07:29:52.005639 | orchestrator | Friday 19 September 2025 07:26:48 +0000 (0:00:00.247) 0:01:46.325 ****** 2025-09-19 07:29:52.005650 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.005661 | orchestrator | 2025-09-19 07:29:52.005671 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 07:29:52.005682 | orchestrator | Friday 19 September 2025 07:26:52 +0000 (0:00:04.556) 0:01:50.882 ****** 2025-09-19 07:29:52.005693 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:29:52.005704 | orchestrator | 2025-09-19 07:29:52.005714 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-19 07:29:52.005725 | orchestrator | Friday 19 September 2025 07:26:53 +0000 (0:00:01.002) 0:01:51.885 ****** 2025-09-19 07:29:52.005735 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.005746 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.005756 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.005767 | orchestrator | 2025-09-19 07:29:52.005777 | orchestrator | TASK [octavia : Update Octavia 
health manager port host_id] ******************** 2025-09-19 07:29:52.005788 | orchestrator | Friday 19 September 2025 07:26:59 +0000 (0:00:05.832) 0:01:57.717 ****** 2025-09-19 07:29:52.005798 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.005809 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.005819 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.005830 | orchestrator | 2025-09-19 07:29:52.005840 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-19 07:29:52.005851 | orchestrator | Friday 19 September 2025 07:27:04 +0000 (0:00:04.733) 0:02:02.450 ****** 2025-09-19 07:29:52.005861 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.005872 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.005882 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.005893 | orchestrator | 2025-09-19 07:29:52.005903 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-19 07:29:52.005914 | orchestrator | Friday 19 September 2025 07:27:05 +0000 (0:00:00.840) 0:02:03.291 ****** 2025-09-19 07:29:52.005924 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:29:52.005935 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:29:52.005946 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:29:52.005956 | orchestrator | 2025-09-19 07:29:52.005966 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-19 07:29:52.005977 | orchestrator | Friday 19 September 2025 07:27:07 +0000 (0:00:02.282) 0:02:05.573 ****** 2025-09-19 07:29:52.005988 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.005999 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.006009 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.006080 | orchestrator | 2025-09-19 07:29:52.006101 | orchestrator | TASK [octavia : Create octavia-interface service] 
****************************** 2025-09-19 07:29:52.006112 | orchestrator | Friday 19 September 2025 07:27:08 +0000 (0:00:01.301) 0:02:06.875 ****** 2025-09-19 07:29:52.006123 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.006134 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.006144 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.006154 | orchestrator | 2025-09-19 07:29:52.006165 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-19 07:29:52.006176 | orchestrator | Friday 19 September 2025 07:27:09 +0000 (0:00:01.226) 0:02:08.101 ****** 2025-09-19 07:29:52.006187 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.006198 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.006208 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.006219 | orchestrator | 2025-09-19 07:29:52.006240 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-19 07:29:52.006251 | orchestrator | Friday 19 September 2025 07:27:12 +0000 (0:00:02.148) 0:02:10.250 ****** 2025-09-19 07:29:52.006262 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.006280 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.006291 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.006301 | orchestrator | 2025-09-19 07:29:52.006312 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-19 07:29:52.006323 | orchestrator | Friday 19 September 2025 07:27:13 +0000 (0:00:01.489) 0:02:11.739 ****** 2025-09-19 07:29:52.006334 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:29:52.006344 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:29:52.006355 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:29:52.006366 | orchestrator | 2025-09-19 07:29:52.006377 | orchestrator | TASK [octavia : Gather facts] ************************************************** 
2025-09-19 07:29:52.006387 | orchestrator | Friday 19 September 2025 07:27:14 +0000 (0:00:00.913) 0:02:12.653 ****** 2025-09-19 07:29:52.006398 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:29:52.006409 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:29:52.006419 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:29:52.006430 | orchestrator | 2025-09-19 07:29:52.006441 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 07:29:52.006451 | orchestrator | Friday 19 September 2025 07:27:17 +0000 (0:00:02.843) 0:02:15.497 ****** 2025-09-19 07:29:52.006462 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:29:52.006473 | orchestrator | 2025-09-19 07:29:52.006484 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-19 07:29:52.006514 | orchestrator | Friday 19 September 2025 07:27:17 +0000 (0:00:00.559) 0:02:16.057 ****** 2025-09-19 07:29:52.006525 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:29:52.006536 | orchestrator | 2025-09-19 07:29:52.006547 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-19 07:29:52.006558 | orchestrator | Friday 19 September 2025 07:27:21 +0000 (0:00:03.980) 0:02:20.037 ****** 2025-09-19 07:29:52.006568 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:29:52.006579 | orchestrator | 2025-09-19 07:29:52.006590 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-19 07:29:52.006601 | orchestrator | Friday 19 September 2025 07:27:25 +0000 (0:00:03.438) 0:02:23.476 ****** 2025-09-19 07:29:52.006612 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-19 07:29:52.006623 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-19 07:29:52.006634 | orchestrator | 2025-09-19 
07:29:52.006644 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-19 07:29:52.006655 | orchestrator | Friday 19 September 2025 07:27:33 +0000 (0:00:07.715) 0:02:31.191 ****** 2025-09-19 07:29:52.006666 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:29:52.006676 | orchestrator | 2025-09-19 07:29:52.006688 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-19 07:29:52.006698 | orchestrator | Friday 19 September 2025 07:27:36 +0000 (0:00:03.486) 0:02:34.677 ****** 2025-09-19 07:29:52.006716 | orchestrator | ok: [testbed-node-0] 2025-09-19 07:29:52.006727 | orchestrator | ok: [testbed-node-1] 2025-09-19 07:29:52.006738 | orchestrator | ok: [testbed-node-2] 2025-09-19 07:29:52.006749 | orchestrator | 2025-09-19 07:29:52.006760 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-19 07:29:52.006770 | orchestrator | Friday 19 September 2025 07:27:36 +0000 (0:00:00.353) 0:02:35.031 ****** 2025-09-19 07:29:52.006785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:29:52.006811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:29:52.006829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:29:52.006841 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:29:52.006854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:29:52.006872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:29:52.006884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.006897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.006932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.006946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.006958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.006976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.006987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:29:52.006999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007021 | orchestrator | 2025-09-19 07:29:52.007032 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-19 07:29:52.007043 | orchestrator | Friday 19 September 2025 07:27:39 +0000 (0:00:02.471) 0:02:37.503 ****** 2025-09-19 07:29:52.007055 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:29:52.007065 | orchestrator | 2025-09-19 07:29:52.007093 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-19 07:29:52.007105 | orchestrator | Friday 19 September 2025 07:27:39 +0000 (0:00:00.132) 0:02:37.636 ****** 2025-09-19 07:29:52.007121 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 07:29:52.007132 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:29:52.007143 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:29:52.007153 | orchestrator | 2025-09-19 07:29:52.007164 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-19 07:29:52.007175 | orchestrator | Friday 19 September 2025 07:27:40 +0000 (0:00:00.577) 0:02:38.213 ****** 2025-09-19 07:29:52.007186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:29:52.007205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:29:52.007217 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.007228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.007240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:29:52.007251 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:29:52.007286 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:29:52.007299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:29:52.007324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.007335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.007346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:29:52.007357 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:29:52.007368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:29:52.007405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:29:52.007417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.007436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.007448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:29:52.007458 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:29:52.007469 | orchestrator | 2025-09-19 07:29:52.007480 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 07:29:52.007556 | orchestrator | Friday 19 September 2025 07:27:40 +0000 (0:00:00.714) 0:02:38.928 ****** 2025-09-19 07:29:52.007569 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 07:29:52.007579 | orchestrator | 2025-09-19 07:29:52.007590 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-19 07:29:52.007601 | orchestrator | Friday 19 September 2025 07:27:41 +0000 (0:00:00.543) 0:02:39.471 ****** 2025-09-19 07:29:52.007613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:29:52.007649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:29:52.007670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 07:29:52.007682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:29:52.007693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:29:52.007704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 07:29:52.007715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:29:52.007850 | orchestrator | 2025-09-19 07:29:52.007861 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-19 07:29:52.007871 | orchestrator | Friday 19 September 2025 07:27:46 +0000 (0:00:05.370) 0:02:44.841 ****** 2025-09-19 07:29:52.007887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:29:52.007899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:29:52.007910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.007921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.007932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:29:52.007943 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:29:52.007965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:29:52.007984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:29:52.007996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2025-09-19 07:29:52.008007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.008018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:29:52.008029 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:29:52.008040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:29:52.008064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:29:52.008079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.008089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.008099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:29:52.008109 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:29:52.008118 | orchestrator | 2025-09-19 07:29:52.008128 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-19 07:29:52.008137 | orchestrator | Friday 19 September 2025 07:27:47 +0000 (0:00:00.985) 0:02:45.827 ****** 2025-09-19 07:29:52.008147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:29:52.008157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:29:52.008174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.008195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.008206 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:29:52.008216 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:29:52.008225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:29:52.008235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:29:52.008245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.008265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.008286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:29:52.008296 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:29:52.008306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 07:29:52.008316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 07:29:52.008326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.008336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 07:29:52.008352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 07:29:52.008362 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:29:52.008372 | orchestrator | 2025-09-19 07:29:52.008382 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-19 07:29:52.008391 | orchestrator | Friday 19 September 2025 07:27:48 +0000 (0:00:00.964) 0:02:46.791 ****** 2025-09-19 
07:29:52.008413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:29:52.008424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:29:52.008434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:29:52.008452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 07:29:52.008462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 07:29:52.008473 | orchestrator |
changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 07:29:52.008507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:29:52.008597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:29:52.008607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:29:52.008617 | orchestrator |
2025-09-19 07:29:52.008627 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2025-09-19 07:29:52.008636 | orchestrator | Friday 19 September 2025 07:27:53 +0000 (0:00:05.186) 0:02:51.977 ******
2025-09-19 07:29:52.008646 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-19 07:29:52.008656 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-19 07:29:52.008666 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-19 07:29:52.008675 | orchestrator |
2025-09-19 07:29:52.008685 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2025-09-19 07:29:52.008694 | orchestrator | Friday 19 September 2025 07:27:56 +0000 (0:00:02.143) 0:02:54.120 ******
2025-09-19 07:29:52.008712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:29:52.008722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:29:52.008744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:29:52.008755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 07:29:52.008765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 07:29:52.008775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 07:29:52.008791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True,
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.008867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:29:52.008877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:29:52.008887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-19 07:29:52.008897 | orchestrator |
2025-09-19 07:29:52.008907 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2025-09-19 07:29:52.008916 | orchestrator | Friday 19 September 2025 07:28:12 +0000 (0:00:16.527) 0:03:10.647 ******
2025-09-19 07:29:52.008926 | orchestrator | changed: [testbed-node-0]
2025-09-19 07:29:52.008936 | orchestrator | changed: [testbed-node-1]
2025-09-19 07:29:52.008945 | orchestrator | changed: [testbed-node-2]
2025-09-19 07:29:52.008955 | orchestrator |
2025-09-19 07:29:52.008964 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2025-09-19 07:29:52.008974 | orchestrator | Friday 19 September 2025 07:28:14 +0000 (0:00:01.542) 0:03:12.190 ******
2025-09-19 07:29:52.008983 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-09-19 07:29:52.008993 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-09-19 07:29:52.009007 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-09-19 07:29:52.009017 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-09-19 07:29:52.009032 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-09-19 07:29:52.009041 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-09-19 07:29:52.009051 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-09-19 07:29:52.009060 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-09-19 07:29:52.009069 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-09-19 07:29:52.009078 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-09-19 07:29:52.009088 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-09-19 07:29:52.009097 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-09-19 07:29:52.009106 | orchestrator |
2025-09-19 07:29:52.009116 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2025-09-19 07:29:52.009125 | orchestrator | Friday 19 September 2025 07:28:19 +0000 (0:00:05.271) 0:03:17.462 ******
2025-09-19 07:29:52.009141 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-09-19 07:29:52.009151 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-09-19 07:29:52.009160 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-09-19 07:29:52.009170 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-09-19 07:29:52.009179 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-09-19 07:29:52.009188 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-09-19 07:29:52.009198 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-09-19 07:29:52.009208 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-09-19 07:29:52.009217 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-09-19 07:29:52.009226 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-09-19 07:29:52.009235 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-09-19 07:29:52.009245 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-09-19 07:29:52.009254 | orchestrator |
2025-09-19 07:29:52.009264 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2025-09-19 07:29:52.009274 | orchestrator | Friday 19 September 2025 07:28:24 +0000 (0:00:05.254) 0:03:22.716 ******
2025-09-19 07:29:52.009283 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-09-19 07:29:52.009292 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-09-19 07:29:52.009302 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-09-19 07:29:52.009311 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-09-19 07:29:52.009320 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-09-19 07:29:52.009330 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-09-19 07:29:52.009339 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-09-19 07:29:52.009348 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-09-19 07:29:52.009357 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-09-19 07:29:52.009367 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-09-19 07:29:52.009376 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-09-19 07:29:52.009385 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-09-19 07:29:52.009394 | orchestrator |
2025-09-19 07:29:52.009404 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2025-09-19 07:29:52.009413 | orchestrator | Friday 19 September 2025 07:28:30 +0000 (0:00:05.427) 0:03:28.144 ******
2025-09-19 07:29:52.009423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:29:52.009448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:29:52.009464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 07:29:52.009474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 07:29:52.009484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 07:29:52.009508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 07:29:52.009518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.009534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.009554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.009564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 07:29:52.009574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.009584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 07:29:52.009594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:29:52.009604 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:29:52.009629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 07:29:52.009639 | orchestrator | 2025-09-19 07:29:52.009649 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 07:29:52.009659 | orchestrator | Friday 19 September 2025 07:28:33 +0000 (0:00:03.799) 0:03:31.943 ****** 2025-09-19 07:29:52.009668 | orchestrator | skipping: [testbed-node-0] 2025-09-19 07:29:52.009678 | orchestrator | skipping: [testbed-node-1] 2025-09-19 07:29:52.009687 | orchestrator | skipping: [testbed-node-2] 2025-09-19 07:29:52.009697 | orchestrator | 2025-09-19 07:29:52.009706 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-19 07:29:52.009716 | orchestrator | Friday 19 September 2025 07:28:34 +0000 (0:00:00.328) 0:03:32.272 ****** 2025-09-19 07:29:52.009725 | orchestrator | changed: [testbed-node-0] 2025-09-19 
07:29:52.009734 | orchestrator | 2025-09-19 07:29:52.009744 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-19 07:29:52.009754 | orchestrator | Friday 19 September 2025 07:28:36 +0000 (0:00:02.292) 0:03:34.565 ****** 2025-09-19 07:29:52.009763 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.009772 | orchestrator | 2025-09-19 07:29:52.009782 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-19 07:29:52.009791 | orchestrator | Friday 19 September 2025 07:28:38 +0000 (0:00:02.140) 0:03:36.705 ****** 2025-09-19 07:29:52.009801 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.009810 | orchestrator | 2025-09-19 07:29:52.009820 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-19 07:29:52.009829 | orchestrator | Friday 19 September 2025 07:28:40 +0000 (0:00:02.188) 0:03:38.894 ****** 2025-09-19 07:29:52.009839 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.009848 | orchestrator | 2025-09-19 07:29:52.009858 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-19 07:29:52.009867 | orchestrator | Friday 19 September 2025 07:28:42 +0000 (0:00:02.167) 0:03:41.062 ****** 2025-09-19 07:29:52.009876 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.009886 | orchestrator | 2025-09-19 07:29:52.009895 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 07:29:52.009905 | orchestrator | Friday 19 September 2025 07:29:05 +0000 (0:00:22.625) 0:04:03.688 ****** 2025-09-19 07:29:52.009914 | orchestrator | 2025-09-19 07:29:52.009924 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 07:29:52.009933 | orchestrator | Friday 19 September 2025 07:29:05 +0000 (0:00:00.068) 0:04:03.756 ****** 
2025-09-19 07:29:52.009942 | orchestrator | 2025-09-19 07:29:52.009952 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 07:29:52.009961 | orchestrator | Friday 19 September 2025 07:29:05 +0000 (0:00:00.068) 0:04:03.824 ****** 2025-09-19 07:29:52.009971 | orchestrator | 2025-09-19 07:29:52.009980 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-19 07:29:52.009990 | orchestrator | Friday 19 September 2025 07:29:05 +0000 (0:00:00.064) 0:04:03.888 ****** 2025-09-19 07:29:52.009999 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.010009 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.010044 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.010055 | orchestrator | 2025-09-19 07:29:52.010065 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-19 07:29:52.010074 | orchestrator | Friday 19 September 2025 07:29:17 +0000 (0:00:11.604) 0:04:15.493 ****** 2025-09-19 07:29:52.010090 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.010100 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.010109 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.010118 | orchestrator | 2025-09-19 07:29:52.010128 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-19 07:29:52.010137 | orchestrator | Friday 19 September 2025 07:29:28 +0000 (0:00:11.478) 0:04:26.971 ****** 2025-09-19 07:29:52.010147 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.010156 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.010166 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.010175 | orchestrator | 2025-09-19 07:29:52.010185 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-19 07:29:52.010194 | orchestrator | Friday 19 
September 2025 07:29:34 +0000 (0:00:05.577) 0:04:32.549 ****** 2025-09-19 07:29:52.010203 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.010213 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.010222 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.010231 | orchestrator | 2025-09-19 07:29:52.010241 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-19 07:29:52.010250 | orchestrator | Friday 19 September 2025 07:29:39 +0000 (0:00:05.413) 0:04:37.962 ****** 2025-09-19 07:29:52.010260 | orchestrator | changed: [testbed-node-0] 2025-09-19 07:29:52.010269 | orchestrator | changed: [testbed-node-1] 2025-09-19 07:29:52.010278 | orchestrator | changed: [testbed-node-2] 2025-09-19 07:29:52.010288 | orchestrator | 2025-09-19 07:29:52.010297 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:29:52.010307 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 07:29:52.010317 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:29:52.010327 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 07:29:52.010336 | orchestrator | 2025-09-19 07:29:52.010346 | orchestrator | 2025-09-19 07:29:52.010355 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:29:52.010365 | orchestrator | Friday 19 September 2025 07:29:50 +0000 (0:00:10.585) 0:04:48.548 ****** 2025-09-19 07:29:52.010380 | orchestrator | =============================================================================== 2025-09-19 07:29:52.010390 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.63s 2025-09-19 07:29:52.010404 | orchestrator | octavia : Add rules for security 
groups -------------------------------- 17.25s 2025-09-19 07:29:52.010414 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.53s 2025-09-19 07:29:52.010423 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.36s 2025-09-19 07:29:52.010433 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.60s 2025-09-19 07:29:52.010442 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.48s 2025-09-19 07:29:52.010452 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.59s 2025-09-19 07:29:52.010461 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.24s 2025-09-19 07:29:52.010471 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.91s 2025-09-19 07:29:52.010480 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.39s 2025-09-19 07:29:52.010510 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.72s 2025-09-19 07:29:52.010520 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.36s 2025-09-19 07:29:52.010529 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.83s 2025-09-19 07:29:52.010545 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.58s 2025-09-19 07:29:52.010554 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.46s 2025-09-19 07:29:52.010564 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.43s 2025-09-19 07:29:52.010573 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.41s 2025-09-19 07:29:52.010583 | orchestrator | service-cert-copy : octavia | Copying over extra 
CA certificates -------- 5.37s 2025-09-19 07:29:52.010592 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.27s 2025-09-19 07:29:52.010602 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.25s 2025-09-19 07:29:52.010611 | orchestrator | 2025-09-19 07:29:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:29:55.062844 | orchestrator | 2025-09-19 07:29:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:29:58.099579 | orchestrator | 2025-09-19 07:29:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:01.137282 | orchestrator | 2025-09-19 07:30:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:04.178805 | orchestrator | 2025-09-19 07:30:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:07.218693 | orchestrator | 2025-09-19 07:30:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:10.255274 | orchestrator | 2025-09-19 07:30:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:13.299886 | orchestrator | 2025-09-19 07:30:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:16.341490 | orchestrator | 2025-09-19 07:30:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:19.385245 | orchestrator | 2025-09-19 07:30:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:22.434302 | orchestrator | 2025-09-19 07:30:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:25.485343 | orchestrator | 2025-09-19 07:30:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:28.527239 | orchestrator | 2025-09-19 07:30:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:31.559571 | orchestrator | 2025-09-19 07:30:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 
07:30:34.600272 | orchestrator | 2025-09-19 07:30:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:37.644328 | orchestrator | 2025-09-19 07:30:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:40.688479 | orchestrator | 2025-09-19 07:30:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:43.735585 | orchestrator | 2025-09-19 07:30:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:46.781419 | orchestrator | 2025-09-19 07:30:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:49.822218 | orchestrator | 2025-09-19 07:30:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 07:30:52.867638 | orchestrator | 2025-09-19 07:30:53.251332 | orchestrator | 2025-09-19 07:30:53.257863 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Sep 19 07:30:53 UTC 2025 2025-09-19 07:30:53.257903 | orchestrator | 2025-09-19 07:30:53.639189 | orchestrator | ok: Runtime: 0:34:45.561517 2025-09-19 07:30:53.896845 | 2025-09-19 07:30:53.896999 | TASK [Bootstrap services] 2025-09-19 07:30:54.660680 | orchestrator | 2025-09-19 07:30:54.660863 | orchestrator | # BOOTSTRAP 2025-09-19 07:30:54.660885 | orchestrator | 2025-09-19 07:30:54.660899 | orchestrator | + set -e 2025-09-19 07:30:54.660913 | orchestrator | + echo 2025-09-19 07:30:54.660927 | orchestrator | + echo '# BOOTSTRAP' 2025-09-19 07:30:54.660944 | orchestrator | + echo 2025-09-19 07:30:54.660990 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-19 07:30:54.670827 | orchestrator | + set -e 2025-09-19 07:30:54.670863 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-19 07:30:56.990308 | orchestrator | 2025-09-19 07:30:56 | INFO  | It takes a moment until task 1ad5a928-0c35-40e6-bd4e-7157ba8c1e10 (flavor-manager) has been started and output is visible here. 
2025-09-19 07:31:05.351082 | orchestrator | 2025-09-19 07:31:00 | INFO  | Flavor SCS-1L-1 created 2025-09-19 07:31:05.351209 | orchestrator | 2025-09-19 07:31:00 | INFO  | Flavor SCS-1L-1-5 created 2025-09-19 07:31:05.351225 | orchestrator | 2025-09-19 07:31:00 | INFO  | Flavor SCS-1V-2 created 2025-09-19 07:31:05.351236 | orchestrator | 2025-09-19 07:31:01 | INFO  | Flavor SCS-1V-2-5 created 2025-09-19 07:31:05.351246 | orchestrator | 2025-09-19 07:31:01 | INFO  | Flavor SCS-1V-4 created 2025-09-19 07:31:05.351256 | orchestrator | 2025-09-19 07:31:01 | INFO  | Flavor SCS-1V-4-10 created 2025-09-19 07:31:05.351266 | orchestrator | 2025-09-19 07:31:01 | INFO  | Flavor SCS-1V-8 created 2025-09-19 07:31:05.351277 | orchestrator | 2025-09-19 07:31:01 | INFO  | Flavor SCS-1V-8-20 created 2025-09-19 07:31:05.351296 | orchestrator | 2025-09-19 07:31:01 | INFO  | Flavor SCS-2V-4 created 2025-09-19 07:31:05.351307 | orchestrator | 2025-09-19 07:31:02 | INFO  | Flavor SCS-2V-4-10 created 2025-09-19 07:31:05.351317 | orchestrator | 2025-09-19 07:31:02 | INFO  | Flavor SCS-2V-8 created 2025-09-19 07:31:05.351327 | orchestrator | 2025-09-19 07:31:02 | INFO  | Flavor SCS-2V-8-20 created 2025-09-19 07:31:05.351337 | orchestrator | 2025-09-19 07:31:02 | INFO  | Flavor SCS-2V-16 created 2025-09-19 07:31:05.351346 | orchestrator | 2025-09-19 07:31:02 | INFO  | Flavor SCS-2V-16-50 created 2025-09-19 07:31:05.351356 | orchestrator | 2025-09-19 07:31:02 | INFO  | Flavor SCS-4V-8 created 2025-09-19 07:31:05.351366 | orchestrator | 2025-09-19 07:31:03 | INFO  | Flavor SCS-4V-8-20 created 2025-09-19 07:31:05.351376 | orchestrator | 2025-09-19 07:31:03 | INFO  | Flavor SCS-4V-16 created 2025-09-19 07:31:05.351385 | orchestrator | 2025-09-19 07:31:03 | INFO  | Flavor SCS-4V-16-50 created 2025-09-19 07:31:05.351395 | orchestrator | 2025-09-19 07:31:03 | INFO  | Flavor SCS-4V-32 created 2025-09-19 07:31:05.351405 | orchestrator | 2025-09-19 07:31:03 | INFO  | Flavor SCS-4V-32-100 created 
2025-09-19 07:31:05.351415 | orchestrator | 2025-09-19 07:31:03 | INFO  | Flavor SCS-8V-16 created 2025-09-19 07:31:05.351425 | orchestrator | 2025-09-19 07:31:03 | INFO  | Flavor SCS-8V-16-50 created 2025-09-19 07:31:05.351435 | orchestrator | 2025-09-19 07:31:04 | INFO  | Flavor SCS-8V-32 created 2025-09-19 07:31:05.351445 | orchestrator | 2025-09-19 07:31:04 | INFO  | Flavor SCS-8V-32-100 created 2025-09-19 07:31:05.351455 | orchestrator | 2025-09-19 07:31:04 | INFO  | Flavor SCS-16V-32 created 2025-09-19 07:31:05.351464 | orchestrator | 2025-09-19 07:31:04 | INFO  | Flavor SCS-16V-32-100 created 2025-09-19 07:31:05.351474 | orchestrator | 2025-09-19 07:31:04 | INFO  | Flavor SCS-2V-4-20s created 2025-09-19 07:31:05.351484 | orchestrator | 2025-09-19 07:31:04 | INFO  | Flavor SCS-4V-8-50s created 2025-09-19 07:31:05.351494 | orchestrator | 2025-09-19 07:31:05 | INFO  | Flavor SCS-8V-32-100s created 2025-09-19 07:31:07.619470 | orchestrator | 2025-09-19 07:31:07 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-19 07:31:17.817423 | orchestrator | 2025-09-19 07:31:17 | INFO  | Task ed6d7758-a382-4a61-82f4-1e1276296149 (bootstrap-basic) was prepared for execution. 2025-09-19 07:31:17.817529 | orchestrator | 2025-09-19 07:31:17 | INFO  | It takes a moment until task ed6d7758-a382-4a61-82f4-1e1276296149 (bootstrap-basic) has been started and output is visible here. 
2025-09-19 07:32:19.015230 | orchestrator | 2025-09-19 07:32:19.015341 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-19 07:32:19.015356 | orchestrator | 2025-09-19 07:32:19.015369 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 07:32:19.015380 | orchestrator | Friday 19 September 2025 07:31:21 +0000 (0:00:00.079) 0:00:00.079 ****** 2025-09-19 07:32:19.015392 | orchestrator | ok: [localhost] 2025-09-19 07:32:19.015404 | orchestrator | 2025-09-19 07:32:19.015415 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-19 07:32:19.015426 | orchestrator | Friday 19 September 2025 07:31:23 +0000 (0:00:01.856) 0:00:01.935 ****** 2025-09-19 07:32:19.015437 | orchestrator | ok: [localhost] 2025-09-19 07:32:19.015448 | orchestrator | 2025-09-19 07:32:19.015460 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-19 07:32:19.015471 | orchestrator | Friday 19 September 2025 07:31:33 +0000 (0:00:09.526) 0:00:11.462 ****** 2025-09-19 07:32:19.015482 | orchestrator | changed: [localhost] 2025-09-19 07:32:19.015493 | orchestrator | 2025-09-19 07:32:19.015504 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-19 07:32:19.015516 | orchestrator | Friday 19 September 2025 07:31:41 +0000 (0:00:07.722) 0:00:19.185 ****** 2025-09-19 07:32:19.015527 | orchestrator | ok: [localhost] 2025-09-19 07:32:19.015538 | orchestrator | 2025-09-19 07:32:19.015549 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-19 07:32:19.015560 | orchestrator | Friday 19 September 2025 07:31:47 +0000 (0:00:06.061) 0:00:25.247 ****** 2025-09-19 07:32:19.015576 | orchestrator | changed: [localhost] 2025-09-19 07:32:19.015587 | orchestrator | 2025-09-19 07:32:19.015598 | orchestrator | 
TASK [Create public network] *************************************************** 2025-09-19 07:32:19.015653 | orchestrator | Friday 19 September 2025 07:31:54 +0000 (0:00:07.730) 0:00:32.978 ****** 2025-09-19 07:32:19.015667 | orchestrator | changed: [localhost] 2025-09-19 07:32:19.015678 | orchestrator | 2025-09-19 07:32:19.015689 | orchestrator | TASK [Set public network to default] ******************************************* 2025-09-19 07:32:19.015700 | orchestrator | Friday 19 September 2025 07:32:00 +0000 (0:00:05.184) 0:00:38.162 ****** 2025-09-19 07:32:19.015711 | orchestrator | changed: [localhost] 2025-09-19 07:32:19.015722 | orchestrator | 2025-09-19 07:32:19.015733 | orchestrator | TASK [Create public subnet] **************************************************** 2025-09-19 07:32:19.015755 | orchestrator | Friday 19 September 2025 07:32:06 +0000 (0:00:06.535) 0:00:44.698 ****** 2025-09-19 07:32:19.015767 | orchestrator | changed: [localhost] 2025-09-19 07:32:19.015781 | orchestrator | 2025-09-19 07:32:19.015793 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-09-19 07:32:19.015806 | orchestrator | Friday 19 September 2025 07:32:11 +0000 (0:00:04.592) 0:00:49.290 ****** 2025-09-19 07:32:19.015818 | orchestrator | changed: [localhost] 2025-09-19 07:32:19.015831 | orchestrator | 2025-09-19 07:32:19.015844 | orchestrator | TASK [Create manager role] ***************************************************** 2025-09-19 07:32:19.015857 | orchestrator | Friday 19 September 2025 07:32:15 +0000 (0:00:03.924) 0:00:53.215 ****** 2025-09-19 07:32:19.015869 | orchestrator | ok: [localhost] 2025-09-19 07:32:19.015882 | orchestrator | 2025-09-19 07:32:19.015894 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 07:32:19.015908 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 07:32:19.015921 | orchestrator 
| 2025-09-19 07:32:19.015933 | orchestrator | 2025-09-19 07:32:19.015947 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 07:32:19.015982 | orchestrator | Friday 19 September 2025 07:32:18 +0000 (0:00:03.620) 0:00:56.835 ****** 2025-09-19 07:32:19.015995 | orchestrator | =============================================================================== 2025-09-19 07:32:19.016009 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.53s 2025-09-19 07:32:19.016022 | orchestrator | Create volume type local ------------------------------------------------ 7.73s 2025-09-19 07:32:19.016035 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.72s 2025-09-19 07:32:19.016048 | orchestrator | Set public network to default ------------------------------------------- 6.54s 2025-09-19 07:32:19.016060 | orchestrator | Get volume type local --------------------------------------------------- 6.06s 2025-09-19 07:32:19.016073 | orchestrator | Create public network --------------------------------------------------- 5.18s 2025-09-19 07:32:19.016085 | orchestrator | Create public subnet ---------------------------------------------------- 4.59s 2025-09-19 07:32:19.016097 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.92s 2025-09-19 07:32:19.016110 | orchestrator | Create manager role ----------------------------------------------------- 3.62s 2025-09-19 07:32:19.016123 | orchestrator | Gathering Facts --------------------------------------------------------- 1.86s 2025-09-19 07:32:21.457568 | orchestrator | 2025-09-19 07:32:21 | INFO  | It takes a moment until task 9bc697b0-db24-4b57-a3eb-fcab54729911 (image-manager) has been started and output is visible here. 
2025-09-19 07:33:02.801173 | orchestrator | 2025-09-19 07:32:24 | INFO  | Processing image 'Cirros 0.6.2' 2025-09-19 07:33:02.801262 | orchestrator | 2025-09-19 07:32:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-09-19 07:33:02.801275 | orchestrator | 2025-09-19 07:32:24 | INFO  | Importing image Cirros 0.6.2 2025-09-19 07:33:02.801283 | orchestrator | 2025-09-19 07:32:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-19 07:33:02.801291 | orchestrator | 2025-09-19 07:32:26 | INFO  | Waiting for image to leave queued state... 2025-09-19 07:33:02.801298 | orchestrator | 2025-09-19 07:32:28 | INFO  | Waiting for import to complete... 2025-09-19 07:33:02.801305 | orchestrator | 2025-09-19 07:32:38 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-09-19 07:33:02.801312 | orchestrator | 2025-09-19 07:32:38 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-09-19 07:33:02.801319 | orchestrator | 2025-09-19 07:32:38 | INFO  | Setting internal_version = 0.6.2 2025-09-19 07:33:02.801326 | orchestrator | 2025-09-19 07:32:38 | INFO  | Setting image_original_user = cirros 2025-09-19 07:33:02.801333 | orchestrator | 2025-09-19 07:32:38 | INFO  | Adding tag os:cirros 2025-09-19 07:33:02.801340 | orchestrator | 2025-09-19 07:32:38 | INFO  | Setting property architecture: x86_64 2025-09-19 07:33:02.801346 | orchestrator | 2025-09-19 07:32:39 | INFO  | Setting property hw_disk_bus: scsi 2025-09-19 07:33:02.801353 | orchestrator | 2025-09-19 07:32:39 | INFO  | Setting property hw_rng_model: virtio 2025-09-19 07:33:02.801360 | orchestrator | 2025-09-19 07:32:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-19 07:33:02.801366 | orchestrator | 2025-09-19 07:32:40 | INFO  | Setting property hw_watchdog_action: reset 2025-09-19 07:33:02.801373 | orchestrator | 2025-09-19 07:32:40 | 
INFO  | Setting property hypervisor_type: qemu 2025-09-19 07:33:02.801380 | orchestrator | 2025-09-19 07:32:40 | INFO  | Setting property os_distro: cirros 2025-09-19 07:33:02.801387 | orchestrator | 2025-09-19 07:32:40 | INFO  | Setting property os_purpose: minimal 2025-09-19 07:33:02.801393 | orchestrator | 2025-09-19 07:32:40 | INFO  | Setting property replace_frequency: never 2025-09-19 07:33:02.801418 | orchestrator | 2025-09-19 07:32:41 | INFO  | Setting property uuid_validity: none 2025-09-19 07:33:02.801424 | orchestrator | 2025-09-19 07:32:41 | INFO  | Setting property provided_until: none 2025-09-19 07:33:02.801436 | orchestrator | 2025-09-19 07:32:41 | INFO  | Setting property image_description: Cirros 2025-09-19 07:33:02.801447 | orchestrator | 2025-09-19 07:32:41 | INFO  | Setting property image_name: Cirros 2025-09-19 07:33:02.801454 | orchestrator | 2025-09-19 07:32:42 | INFO  | Setting property internal_version: 0.6.2 2025-09-19 07:33:02.801461 | orchestrator | 2025-09-19 07:32:42 | INFO  | Setting property image_original_user: cirros 2025-09-19 07:33:02.801468 | orchestrator | 2025-09-19 07:32:42 | INFO  | Setting property os_version: 0.6.2 2025-09-19 07:33:02.801475 | orchestrator | 2025-09-19 07:32:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-19 07:33:02.801483 | orchestrator | 2025-09-19 07:32:43 | INFO  | Setting property image_build_date: 2023-05-30 2025-09-19 07:33:02.801489 | orchestrator | 2025-09-19 07:32:43 | INFO  | Checking status of 'Cirros 0.6.2' 2025-09-19 07:33:02.801496 | orchestrator | 2025-09-19 07:32:43 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-09-19 07:33:02.801503 | orchestrator | 2025-09-19 07:32:43 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-09-19 07:33:02.801509 | orchestrator | 2025-09-19 07:32:43 | INFO  | Processing image 'Cirros 0.6.3' 2025-09-19 07:33:02.801516 | orchestrator | 2025-09-19 
07:32:43 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-09-19 07:33:02.801523 | orchestrator | 2025-09-19 07:32:43 | INFO  | Importing image Cirros 0.6.3 2025-09-19 07:33:02.801530 | orchestrator | 2025-09-19 07:32:43 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-19 07:33:02.801536 | orchestrator | 2025-09-19 07:32:44 | INFO  | Waiting for image to leave queued state... 2025-09-19 07:33:02.801543 | orchestrator | 2025-09-19 07:32:47 | INFO  | Waiting for import to complete... 2025-09-19 07:33:02.801561 | orchestrator | 2025-09-19 07:32:57 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-09-19 07:33:02.801569 | orchestrator | 2025-09-19 07:32:57 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-09-19 07:33:02.801576 | orchestrator | 2025-09-19 07:32:57 | INFO  | Setting internal_version = 0.6.3 2025-09-19 07:33:02.801582 | orchestrator | 2025-09-19 07:32:57 | INFO  | Setting image_original_user = cirros 2025-09-19 07:33:02.801589 | orchestrator | 2025-09-19 07:32:57 | INFO  | Adding tag os:cirros 2025-09-19 07:33:02.801595 | orchestrator | 2025-09-19 07:32:57 | INFO  | Setting property architecture: x86_64 2025-09-19 07:33:02.801602 | orchestrator | 2025-09-19 07:32:58 | INFO  | Setting property hw_disk_bus: scsi 2025-09-19 07:33:02.801609 | orchestrator | 2025-09-19 07:32:58 | INFO  | Setting property hw_rng_model: virtio 2025-09-19 07:33:02.801615 | orchestrator | 2025-09-19 07:32:58 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-19 07:33:02.801622 | orchestrator | 2025-09-19 07:32:58 | INFO  | Setting property hw_watchdog_action: reset 2025-09-19 07:33:02.801629 | orchestrator | 2025-09-19 07:32:59 | INFO  | Setting property hypervisor_type: qemu 2025-09-19 07:33:02.801635 | orchestrator | 2025-09-19 07:32:59 | INFO  | Setting property os_distro: cirros 
2025-09-19 07:33:02.801689 | orchestrator | 2025-09-19 07:32:59 | INFO  | Setting property os_purpose: minimal
2025-09-19 07:33:02.801696 | orchestrator | 2025-09-19 07:32:59 | INFO  | Setting property replace_frequency: never
2025-09-19 07:33:02.801703 | orchestrator | 2025-09-19 07:32:59 | INFO  | Setting property uuid_validity: none
2025-09-19 07:33:02.801711 | orchestrator | 2025-09-19 07:33:00 | INFO  | Setting property provided_until: none
2025-09-19 07:33:02.801718 | orchestrator | 2025-09-19 07:33:00 | INFO  | Setting property image_description: Cirros
2025-09-19 07:33:02.801726 | orchestrator | 2025-09-19 07:33:00 | INFO  | Setting property image_name: Cirros
2025-09-19 07:33:02.801734 | orchestrator | 2025-09-19 07:33:00 | INFO  | Setting property internal_version: 0.6.3
2025-09-19 07:33:02.801742 | orchestrator | 2025-09-19 07:33:01 | INFO  | Setting property image_original_user: cirros
2025-09-19 07:33:02.801749 | orchestrator | 2025-09-19 07:33:01 | INFO  | Setting property os_version: 0.6.3
2025-09-19 07:33:02.801758 | orchestrator | 2025-09-19 07:33:01 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-19 07:33:02.801765 | orchestrator | 2025-09-19 07:33:01 | INFO  | Setting property image_build_date: 2024-09-26
2025-09-19 07:33:02.801777 | orchestrator | 2025-09-19 07:33:01 | INFO  | Checking status of 'Cirros 0.6.3'
2025-09-19 07:33:02.801786 | orchestrator | 2025-09-19 07:33:01 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-09-19 07:33:02.801793 | orchestrator | 2025-09-19 07:33:01 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-09-19 07:33:03.240548 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-09-19 07:33:05.791815 | orchestrator | 2025-09-19 07:33:05 | INFO  | date: 2025-09-19
2025-09-19 07:33:05.791911 | orchestrator | 2025-09-19 07:33:05 | INFO  | image: octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 07:33:05.791928 | orchestrator | 2025-09-19 07:33:05 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 07:33:05.791959 | orchestrator | 2025-09-19 07:33:05 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2.CHECKSUM
2025-09-19 07:33:05.822602 | orchestrator | 2025-09-19 07:33:05 | INFO  | checksum: cb1f8a9bf0aeb0e92074b04499e688b0043001241167a8bf8df49931cc66885f
2025-09-19 07:33:05.891133 | orchestrator | 2025-09-19 07:33:05 | INFO  | It takes a moment until task ba50a45d-1a19-435a-815b-3af09655f996 (image-manager) has been started and output is visible here.
2025-09-19 07:33:06.904604 | orchestrator | 2025-09-19 07:33:06 | ERROR  | Error validating data '/tmp/tmph59sf4uu/tmp7700krx6.yml' with 'None'
2025-09-19 07:33:06.904784 | orchestrator | 2025-09-19 07:33:06 | ERROR  |  images.0.meta.os_purpose: Required field missing
2025-09-19 07:33:06.904804 | orchestrator | Image definition validation failed with these error(s): [('/tmp/tmph59sf4uu/tmp7700krx6.yml', 'images.0.meta.os_purpose: Required field missing')]
2025-09-19 07:33:07.523574 | orchestrator | ERROR
2025-09-19 07:33:07.523894 | orchestrator | {
2025-09-19 07:33:07.523947 | orchestrator | "delta": "0:02:13.093519",
2025-09-19 07:33:07.523983 | orchestrator | "end": "2025-09-19 07:33:07.362011",
2025-09-19 07:33:07.524014 | orchestrator | "msg": "non-zero return code",
2025-09-19 07:33:07.524043 | orchestrator | "rc": 1,
2025-09-19 07:33:07.524072 | orchestrator | "start": "2025-09-19 07:30:54.268492"
2025-09-19 07:33:07.524099 | orchestrator | } failure
2025-09-19 07:33:07.533979 |
2025-09-19 07:33:07.534090 | PLAY RECAP
2025-09-19 07:33:07.534150 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-19 07:33:07.534180 |
2025-09-19 07:33:07.841513 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-19 07:33:07.846915 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 07:33:09.485149 |
2025-09-19 07:33:09.485313 | PLAY [Post output play]
2025-09-19 07:33:09.510951 |
2025-09-19 07:33:09.511090 | LOOP [stage-output : Register sources]
2025-09-19 07:33:09.615096 |
2025-09-19 07:33:09.615465 | TASK [stage-output : Check sudo]
2025-09-19 07:33:10.467770 | orchestrator | sudo: a password is required
2025-09-19 07:33:10.657006 | orchestrator | ok: Runtime: 0:00:00.010793
2025-09-19 07:33:10.672454 |
2025-09-19 07:33:10.672610 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-19 07:33:10.712942 |
2025-09-19 07:33:10.713248 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-19 07:33:10.792180 | orchestrator | ok
2025-09-19 07:33:10.801439 |
2025-09-19 07:33:10.801569 | LOOP [stage-output : Ensure target folders exist]
2025-09-19 07:33:11.282111 | orchestrator | ok: "docs"
2025-09-19 07:33:11.282469 |
2025-09-19 07:33:11.547942 | orchestrator | ok: "artifacts"
2025-09-19 07:33:11.791617 | orchestrator | ok: "logs"
2025-09-19 07:33:11.813822 |
2025-09-19 07:33:11.814005 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-19 07:33:11.851857 |
2025-09-19 07:33:11.852157 | TASK [stage-output : Make all log files readable]
2025-09-19 07:33:12.129557 | orchestrator | ok
2025-09-19 07:33:12.139902 |
2025-09-19 07:33:12.140042 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-19 07:33:12.184918 | orchestrator | skipping: Conditional result was False
2025-09-19 07:33:12.201452 |
2025-09-19 07:33:12.201606 | TASK [stage-output : Discover log files for compression]
2025-09-19 07:33:12.226037 | orchestrator | skipping:
Conditional result was False 2025-09-19 07:33:12.239454 | 2025-09-19 07:33:12.239587 | LOOP [stage-output : Archive everything from logs] 2025-09-19 07:33:12.280464 | 2025-09-19 07:33:12.280611 | PLAY [Post cleanup play] 2025-09-19 07:33:12.288430 | 2025-09-19 07:33:12.288537 | TASK [Set cloud fact (Zuul deployment)] 2025-09-19 07:33:12.344887 | orchestrator | ok 2025-09-19 07:33:12.356351 | 2025-09-19 07:33:12.356470 | TASK [Set cloud fact (local deployment)] 2025-09-19 07:33:12.390220 | orchestrator | skipping: Conditional result was False 2025-09-19 07:33:12.403871 | 2025-09-19 07:33:12.404031 | TASK [Clean the cloud environment] 2025-09-19 07:33:12.970499 | orchestrator | 2025-09-19 07:33:12 - clean up servers 2025-09-19 07:33:13.729085 | orchestrator | 2025-09-19 07:33:13 - testbed-manager 2025-09-19 07:33:13.817914 | orchestrator | 2025-09-19 07:33:13 - testbed-node-3 2025-09-19 07:33:13.908234 | orchestrator | 2025-09-19 07:33:13 - testbed-node-4 2025-09-19 07:33:14.011716 | orchestrator | 2025-09-19 07:33:14 - testbed-node-2 2025-09-19 07:33:14.104144 | orchestrator | 2025-09-19 07:33:14 - testbed-node-5 2025-09-19 07:33:14.194565 | orchestrator | 2025-09-19 07:33:14 - testbed-node-0 2025-09-19 07:33:14.277235 | orchestrator | 2025-09-19 07:33:14 - testbed-node-1 2025-09-19 07:33:14.362624 | orchestrator | 2025-09-19 07:33:14 - clean up keypairs 2025-09-19 07:33:14.380762 | orchestrator | 2025-09-19 07:33:14 - testbed 2025-09-19 07:33:14.403048 | orchestrator | 2025-09-19 07:33:14 - wait for servers to be gone 2025-09-19 07:33:25.230635 | orchestrator | 2025-09-19 07:33:25 - clean up ports 2025-09-19 07:33:25.435031 | orchestrator | 2025-09-19 07:33:25 - 0d861418-88af-4f7c-ae36-bda95b00e6a8 2025-09-19 07:33:25.682468 | orchestrator | 2025-09-19 07:33:25 - 4dd65f6d-316e-4de9-9c92-d1dd4ece4225 2025-09-19 07:33:26.140536 | orchestrator | 2025-09-19 07:33:26 - 52ffb3ae-9c33-4f38-8c3a-4759b7ddc27b 2025-09-19 07:33:26.482939 | orchestrator | 2025-09-19 07:33:26 - 
a39759e9-60c4-478a-a2b4-8a556db4f1f4 2025-09-19 07:33:26.854191 | orchestrator | 2025-09-19 07:33:26 - b36e2575-941f-447c-b53f-ba68f3d04f1d 2025-09-19 07:33:27.070728 | orchestrator | 2025-09-19 07:33:27 - e2cd11fc-d85e-4470-88e1-8f6d95cc6539 2025-09-19 07:33:27.276569 | orchestrator | 2025-09-19 07:33:27 - f4954f3e-6b74-4831-aa6d-ade5d01bd25a 2025-09-19 07:33:27.484836 | orchestrator | 2025-09-19 07:33:27 - clean up volumes 2025-09-19 07:33:27.612108 | orchestrator | 2025-09-19 07:33:27 - testbed-volume-5-node-base 2025-09-19 07:33:27.652528 | orchestrator | 2025-09-19 07:33:27 - testbed-volume-3-node-base 2025-09-19 07:33:27.700034 | orchestrator | 2025-09-19 07:33:27 - testbed-volume-4-node-base 2025-09-19 07:33:27.740572 | orchestrator | 2025-09-19 07:33:27 - testbed-volume-0-node-base 2025-09-19 07:33:27.780324 | orchestrator | 2025-09-19 07:33:27 - testbed-volume-2-node-base 2025-09-19 07:33:27.823844 | orchestrator | 2025-09-19 07:33:27 - testbed-volume-1-node-base 2025-09-19 07:33:27.866822 | orchestrator | 2025-09-19 07:33:27 - testbed-volume-8-node-5 2025-09-19 07:33:27.914362 | orchestrator | 2025-09-19 07:33:27 - testbed-volume-4-node-4 2025-09-19 07:33:27.959896 | orchestrator | 2025-09-19 07:33:27 - testbed-volume-7-node-4 2025-09-19 07:33:28.008477 | orchestrator | 2025-09-19 07:33:28 - testbed-volume-manager-base 2025-09-19 07:33:28.052959 | orchestrator | 2025-09-19 07:33:28 - testbed-volume-5-node-5 2025-09-19 07:33:28.095580 | orchestrator | 2025-09-19 07:33:28 - testbed-volume-3-node-3 2025-09-19 07:33:28.140872 | orchestrator | 2025-09-19 07:33:28 - testbed-volume-6-node-3 2025-09-19 07:33:28.181948 | orchestrator | 2025-09-19 07:33:28 - testbed-volume-1-node-4 2025-09-19 07:33:28.221714 | orchestrator | 2025-09-19 07:33:28 - testbed-volume-2-node-5 2025-09-19 07:33:28.266645 | orchestrator | 2025-09-19 07:33:28 - testbed-volume-0-node-3 2025-09-19 07:33:28.311686 | orchestrator | 2025-09-19 07:33:28 - disconnect routers 2025-09-19 
07:33:28.440372 | orchestrator | 2025-09-19 07:33:28 - testbed 2025-09-19 07:33:29.912997 | orchestrator | 2025-09-19 07:33:29 - clean up subnets 2025-09-19 07:33:29.960219 | orchestrator | 2025-09-19 07:33:29 - subnet-testbed-management 2025-09-19 07:33:30.111556 | orchestrator | 2025-09-19 07:33:30 - clean up networks 2025-09-19 07:33:30.289345 | orchestrator | 2025-09-19 07:33:30 - net-testbed-management 2025-09-19 07:33:30.573080 | orchestrator | 2025-09-19 07:33:30 - clean up security groups 2025-09-19 07:33:30.618149 | orchestrator | 2025-09-19 07:33:30 - testbed-node 2025-09-19 07:33:30.725652 | orchestrator | 2025-09-19 07:33:30 - testbed-management 2025-09-19 07:33:31.326212 | orchestrator | 2025-09-19 07:33:31 - clean up floating ips 2025-09-19 07:33:31.360179 | orchestrator | 2025-09-19 07:33:31 - 81.163.193.132 2025-09-19 07:33:31.700909 | orchestrator | 2025-09-19 07:33:31 - clean up routers 2025-09-19 07:33:31.761968 | orchestrator | 2025-09-19 07:33:31 - testbed 2025-09-19 07:33:32.959081 | orchestrator | ok: Runtime: 0:00:19.946093 2025-09-19 07:33:32.963382 | 2025-09-19 07:33:32.963530 | PLAY RECAP 2025-09-19 07:33:32.963660 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-09-19 07:33:32.963715 | 2025-09-19 07:33:33.097846 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-19 07:33:33.100111 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-19 07:33:33.807462 | 2025-09-19 07:33:33.807691 | PLAY [Cleanup play] 2025-09-19 07:33:33.825836 | 2025-09-19 07:33:33.825978 | TASK [Set cloud fact (Zuul deployment)] 2025-09-19 07:33:33.883325 | orchestrator | ok 2025-09-19 07:33:33.893348 | 2025-09-19 07:33:33.893497 | TASK [Set cloud fact (local deployment)] 2025-09-19 07:33:33.928665 | orchestrator | skipping: Conditional result was False 2025-09-19 07:33:33.941869 | 2025-09-19 07:33:33.942035 | TASK [Clean the cloud 
environment] 2025-09-19 07:33:35.048528 | orchestrator | 2025-09-19 07:33:35 - clean up servers 2025-09-19 07:33:35.511694 | orchestrator | 2025-09-19 07:33:35 - clean up keypairs 2025-09-19 07:33:35.528580 | orchestrator | 2025-09-19 07:33:35 - wait for servers to be gone 2025-09-19 07:33:35.565950 | orchestrator | 2025-09-19 07:33:35 - clean up ports 2025-09-19 07:33:35.636479 | orchestrator | 2025-09-19 07:33:35 - clean up volumes 2025-09-19 07:33:35.695441 | orchestrator | 2025-09-19 07:33:35 - disconnect routers 2025-09-19 07:33:35.723117 | orchestrator | 2025-09-19 07:33:35 - clean up subnets 2025-09-19 07:33:35.743925 | orchestrator | 2025-09-19 07:33:35 - clean up networks 2025-09-19 07:33:35.862255 | orchestrator | 2025-09-19 07:33:35 - clean up security groups 2025-09-19 07:33:35.903807 | orchestrator | 2025-09-19 07:33:35 - clean up floating ips 2025-09-19 07:33:35.927191 | orchestrator | 2025-09-19 07:33:35 - clean up routers 2025-09-19 07:33:36.483354 | orchestrator | ok: Runtime: 0:00:01.252815 2025-09-19 07:33:36.487034 | 2025-09-19 07:33:36.487209 | PLAY RECAP 2025-09-19 07:33:36.487310 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-09-19 07:33:36.487361 | 2025-09-19 07:33:36.613281 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-19 07:33:36.615463 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-19 07:33:37.425113 | 2025-09-19 07:33:37.425288 | PLAY [Base post-fetch] 2025-09-19 07:33:37.441006 | 2025-09-19 07:33:37.441207 | TASK [fetch-output : Set log path for multiple nodes] 2025-09-19 07:33:37.506433 | orchestrator | skipping: Conditional result was False 2025-09-19 07:33:37.513414 | 2025-09-19 07:33:37.513573 | TASK [fetch-output : Set log path for single node] 2025-09-19 07:33:37.566414 | orchestrator | ok 2025-09-19 07:33:37.573263 | 2025-09-19 07:33:37.573399 | LOOP [fetch-output : 
Ensure local output dirs] 2025-09-19 07:33:38.051550 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/56e5b89bf4ee4e74bb04767862d53916/work/logs" 2025-09-19 07:33:38.334141 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/56e5b89bf4ee4e74bb04767862d53916/work/artifacts" 2025-09-19 07:33:38.593408 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/56e5b89bf4ee4e74bb04767862d53916/work/docs" 2025-09-19 07:33:38.622548 | 2025-09-19 07:33:38.622724 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-09-19 07:33:39.537047 | orchestrator | changed: .d..t...... ./ 2025-09-19 07:33:39.537338 | orchestrator | changed: All items complete 2025-09-19 07:33:39.537382 | 2025-09-19 07:33:40.277187 | orchestrator | changed: .d..t...... ./ 2025-09-19 07:33:41.000148 | orchestrator | changed: .d..t...... ./ 2025-09-19 07:33:41.031091 | 2025-09-19 07:33:41.031257 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-09-19 07:33:41.068526 | orchestrator | skipping: Conditional result was False 2025-09-19 07:33:41.071197 | orchestrator | skipping: Conditional result was False 2025-09-19 07:33:41.095009 | 2025-09-19 07:33:41.095119 | PLAY RECAP 2025-09-19 07:33:41.095193 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-09-19 07:33:41.095230 | 2025-09-19 07:33:41.223168 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-19 07:33:41.225732 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-19 07:33:41.949282 | 2025-09-19 07:33:41.949438 | PLAY [Base post] 2025-09-19 07:33:41.963842 | 2025-09-19 07:33:41.963975 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-09-19 07:33:42.986950 | orchestrator | changed 2025-09-19 07:33:42.997142 | 2025-09-19 07:33:42.997276 | PLAY RECAP 2025-09-19 07:33:42.997355 | orchestrator | ok: 1 changed: 1 unreachable: 0 
failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-09-19 07:33:42.997433 | 2025-09-19 07:33:43.115414 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-19 07:33:43.116432 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-09-19 07:33:43.887706 | 2025-09-19 07:33:43.887867 | PLAY [Base post-logs] 2025-09-19 07:33:43.898241 | 2025-09-19 07:33:43.898368 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-09-19 07:33:44.353131 | localhost | changed 2025-09-19 07:33:44.378455 | 2025-09-19 07:33:44.378687 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-09-19 07:33:44.409779 | localhost | ok 2025-09-19 07:33:44.416863 | 2025-09-19 07:33:44.417038 | TASK [Set zuul-log-path fact] 2025-09-19 07:33:44.447097 | localhost | ok 2025-09-19 07:33:44.464351 | 2025-09-19 07:33:44.464540 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-19 07:33:44.501750 | localhost | ok 2025-09-19 07:33:44.507329 | 2025-09-19 07:33:44.507484 | TASK [upload-logs : Create log directories] 2025-09-19 07:33:45.000793 | localhost | changed 2025-09-19 07:33:45.003637 | 2025-09-19 07:33:45.003792 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-19 07:33:45.506308 | localhost -> localhost | ok: Runtime: 0:00:00.008612 2025-09-19 07:33:45.516202 | 2025-09-19 07:33:45.516396 | TASK [upload-logs : Upload logs to log server] 2025-09-19 07:33:46.072713 | localhost | Output suppressed because no_log was given 2025-09-19 07:33:46.077063 | 2025-09-19 07:33:46.077281 | LOOP [upload-logs : Compress console log and json output] 2025-09-19 07:33:46.132253 | localhost | skipping: Conditional result was False 2025-09-19 07:33:46.137072 | localhost | skipping: Conditional result was False 2025-09-19 07:33:46.149522 | 2025-09-19 07:33:46.149816 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-19 07:33:46.198356 | 
localhost | skipping: Conditional result was False 2025-09-19 07:33:46.198980 | 2025-09-19 07:33:46.200817 | localhost | skipping: Conditional result was False 2025-09-19 07:33:46.208268 | 2025-09-19 07:33:46.208533 | LOOP [upload-logs : Upload console log and json output]
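Editor's note on the failure above: the deploy run did not fail in the cleanup or log-upload phases shown at the end; it failed earlier, when the Octavia amphora bootstrap script handed a generated image definition to openstack-image-manager and schema validation rejected it with `images.0.meta.os_purpose: Required field missing`. A minimal sketch of what a passing definition could look like follows. This is a hypothetical reconstruction, not the actual `/tmp/tmph59sf4uu/tmp7700krx6.yml`: the field layout is inferred from the error path (`images.0.meta.…`) and from the properties the same run applied to the Cirros image (where `os_purpose: minimal` was set), and the `os_purpose` value chosen here is an assumption.

```yaml
# Hypothetical openstack-image-manager image definition for the amphora
# image from this log. Only the structure relevant to the validation
# error is shown; name, URL, and checksum are taken verbatim from the
# log output above, everything else is an illustrative assumption.
images:
  - name: OpenStack Octavia Amphora
    format: qcow2
    latest_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
    latest_checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2.CHECKSUM
    meta:
      architecture: x86_64
      hw_disk_bus: scsi
      hw_rng_model: virtio
      # The required field the failing definition lacked; "infrastructure"
      # is an assumed value for a load-balancer service image.
      os_purpose: infrastructure
```

The fix therefore belongs in whatever template or script (here `301-openstack-octavia-amhpora-image.sh` in the testbed configuration) renders the temporary YAML, not in the cleanup playbooks that ran afterwards.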